THINGS I DON'T KNOW BUT CAN LEARN
Can I Classify Particle Tracks in a Cloud Chamber?:
Try to build an ML model that classifies particle tracks. Build a small benchtop cloud chamber with some Peltier coolers and set up a camera system. Log photos, label the particle tracks by hand, and check whether the model classifies them correctly. Am I able to do some statistical analysis on what gets captured?
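A minimal sketch of the classification step, assuming the logged photos have been sorted by hand into folders named after hypothetical track classes (alpha, beta, muon) under data/tracks/. The path, class names, image size, and network shape are all placeholders, not a worked-out design:
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Track photos are essentially monochrome, so collapse to one channel and resize.
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data/tracks", transform=transform)  # hypothetical folder layout
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Tiny CNN: two conv blocks, then a linear classifier over whatever classes exist on disk.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, len(dataset.classes)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()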
Games with Physarum polycephalum:
Physarum polycephalum is a single-celled organism that displays an interesting amount of "intelligence": it can find the shortest path between locations and remember where food sources are. I would like to see which additional stimuli (light, sound, temperature, food types) have a positive or negative effect on the organism's ability to compute shortest paths.
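One way I might score a run, sketched below: reduce the arena to a weighted graph, compute the true shortest path with networkx, and compare it to the length of the tube network the organism actually built. The node names, edge lengths, and measured value are placeholders, not real measurements:
import networkx as nx

# Graph abstraction of an arena: nodes are food sources and junctions, weights are distances.
arena = nx.Graph()
arena.add_weighted_edges_from([
    ("food_A", "junction_1", 3.0),
    ("junction_1", "junction_2", 2.5),
    ("junction_2", "food_B", 4.0),
    ("food_A", "junction_3", 5.0),
    ("junction_3", "food_B", 1.5),
])

optimal = nx.shortest_path_length(arena, "food_A", "food_B", weight="weight")
measured = 7.0  # length of the tube network the organism actually built (placeholder)

# Ratio >= 1.0; the closer to 1.0, the closer the organism got to the optimal route.
print(f"optimal: {optimal:.1f}, measured: {measured:.1f}, ratio: {measured / optimal:.2f}")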
REALM OF UNSOLVABLE OR CURRENTLY UNSOLVED
Randomness of Particle Decay:
Most particles decay (protons...maybe), and the timing of any individual decay cannot be predicted. The decay process is fundamentally random. But it is random in a way that is unlike everyday random processes such as waves or dice rolls. A dice roll is not fundamentally random: with perfect knowledge of all the initial physical conditions, you could predict which side the die would land on.
We do not understand how a given particle "decides" to decay. There is no specific trigger for an individual particle's decay; we can only describe the system stochastically. You could answer that it is just a fundamental property of the quantum world and therefore unknowable, or maybe it is not even a meaningful question.
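The stochastic description is easy to make concrete with a tiny Monte Carlo, sketched below. Every simulated particle obeys the same rule (a fixed decay probability per time step, chosen arbitrarily here), so no individual decay time is predictable, yet the population as a whole traces out a decay curve you can predict very accurately:
import math
import random

decay_prob = 0.05        # arbitrary per-step decay probability
n_particles = 100_000
half_life = math.log(2) / -math.log(1 - decay_prob)   # about 13.5 steps at this rate

alive = n_particles
for step in range(1, 61):
    # Each surviving particle independently "rolls" for decay this step.
    alive -= sum(1 for _ in range(alive) if random.random() < decay_prob)
    if step % 20 == 0:
        predicted = n_particles * (1 - decay_prob) ** step   # what the statistics predict
        print(f"step {step:2d}: surviving {alive:6d}  (statistical prediction {predicted:7.0f})")

print(f"half-life of the ensemble: about {half_life:.1f} steps")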
Picture of Anything and Upper Bound of Knowledge:
Imagine you have a 4k image grid. If you iterated through all possible combinations of pixel colors, you would theoretically have a picture of anything that can be captured in the visible spectrum. All knowledge that could be written out or visually displayed would exist somewhere in this magic image set. That would also include images of things that have never existed. What percentage of it is random noise? Is there a way to put constraints on that?
\(\text{Possible RGB colors per pixel: }256^3 = 16777216\)
\(\text{4k image pixels: }3840 \times 2160 = 8294400 \)
\(\text{Possible images: } 16777216^{8294400} \)
import math

# 4k resolution with full 8-bit RGB color depth
p_width = 3840
p_height = 2160
colors_per_pixel = 256 ** 3          # 16,777,216 possible RGB values
pixel_count = p_width * p_height     # 8,294,400 pixels

# Total images = colors_per_pixel ** pixel_count, far too large to compute
# directly, so work with its base-10 logarithm instead.
log10_value = pixel_count * math.log10(colors_per_pixel)
mantissa = 10 ** (log10_value - int(log10_value))
exponent_part = int(log10_value)
print(f"Approximation: {mantissa:.5f}e+{exponent_part}")
\(\text{ Approximate number of images: } 5.06813 \times 10^{59924716}\)
This number represents the upper limit of the information that can be contained in an image of this size. It is not even comprehensibly large; for comparison, the number of atoms in the entire observable universe is only about \(10^{80}\). That being said, we could probably put some constraints on the image information content.
Not all of the information would be unique. There would be images that are nearly identical to one another, and you could discard a large number of these near-duplicates. You would also have a minimum legible size for any human-readable font.
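A rough sketch of how much such constraints shrink the space is below. The specific reductions (16 distinguishable brightness levels per channel, treating 12x12-pixel blocks as the smallest legible detail) are made-up assumptions, just to show how the exponent moves; it collapses by orders of magnitude but stays absurdly large:
import math

def log10_image_count(levels_per_channel, effective_pixels):
    """log10 of (colors per pixel) ** (number of distinguishable pixels)."""
    colors = levels_per_channel ** 3
    return effective_pixels * math.log10(colors)

full = log10_image_count(256, 3840 * 2160)                   # every possible 4k image
coarse = log10_image_count(16, (3840 // 12) * (2160 // 12))  # quantized colors, block-level detail

print(f"full space:   ~10^{full:,.0f} images")
print(f"coarse space: ~10^{coarse:,.0f} images")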
If there is a theory of everything in physics that is documentable, it would exist inside of this image space.
Another wild concept is that there would be a portrait of every human who has ever lived or ever will live.
I am going to expand on this in a separate post.