It's fun (errr... for me at least) to translate the ancient BASIC code into a modern implementation and play around.
The article mentions that it's interesting how the 2D functions can look 3D. That's definitely true. But there's also no reason you can't tack on however many dimensions you want and get genuinely many-dimensional structures to noodle around with in visualizations and animations.
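If anyone wants to play along, the whole thing fits in a screenful of Python/NumPy. Here's a minimal sketch with my own choices of sequence, parameter ranges, and iteration counts (not the original listing): each pixel is coloured by the Lyapunov exponent of the logistic map as the parameter alternates between a and b according to a sequence like "AB".

    # Minimal Markus-Lyapunov fractal sketch (sequence, ranges, and iteration
    # counts are arbitrary choices of mine, not the original BASIC listing).
    import numpy as np
    import matplotlib.pyplot as plt

    def lyapunov_fractal(seq="AB", a_range=(2.0, 4.0), b_range=(2.0, 4.0),
                         size=500, warmup=100, iters=400):
        a = np.linspace(*a_range, size)
        b = np.linspace(*b_range, size)
        A, B = np.meshgrid(a, b)
        x = np.full_like(A, 0.5)           # common initial condition x0 = 0.5
        lam = np.zeros_like(A)             # running sum of log|f'(x)|

        def r_of(n):                       # pick a or b according to the sequence
            return A if seq[n % len(seq)] == "A" else B

        for n in range(warmup):            # discard the transient
            x = r_of(n) * x * (1.0 - x)
        for n in range(warmup, warmup + iters):
            r = r_of(n)
            x = r * x * (1.0 - x)
            lam += np.log(np.abs(r * (1.0 - 2.0 * x)) + 1e-12)
        return lam / iters                 # approximate Lyapunov exponent per pixel

    plt.imshow(lyapunov_fractal(), cmap="RdYlBu", origin="lower", vmin=-2, vmax=1)
    plt.show()

Going to more dimensions is then just a longer alphabet ("ABC", "ABCD", ...) with one parameter axis per letter; you render 2D slices of the grid or animate along the extra axes.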
These techniques are key to making AI more robust and to establishing certifiable trust in its behavior.
They range from pre-deep-neural-network-era work like LQR-RRT trees to today's hot topics of contraction theory and control barrier certificates in autonomous vehicles.
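To make the "certificate" part concrete in the simplest possible setting, here's a toy sketch (my own example, not from any particular paper): a quadratic Lyapunov function V(x) = x^T P x for a stable linear system, found by solving the Lyapunov equation. Barrier certificates and contraction metrics generalize this same recipe to safety constraints and nonlinear dynamics.

    # Toy quadratic Lyapunov certificate for dx/dt = A x (illustrative example).
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])    # a stable example system (eigenvalues -1, -2)
    Q = np.eye(2)                   # any positive-definite choice

    # Solve A^T P + P A = -Q for P.
    P = solve_continuous_lyapunov(A.T, -Q)

    # If P is positive definite, V(x) = x^T P x certifies asymptotic stability:
    # V > 0 away from the origin and dV/dt = -x^T Q x < 0 along trajectories.
    print("eigenvalues of P:", np.linalg.eigvalsh(P))   # all positive => certified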
Has progress stalled in this area? I don't know, but surely there are people working on it. In fact, I recently saw an interesting post on HN about a new technique that, among other things, enables faster estimation of Lyapunov exponents: https://news.ycombinator.com/item?id=45374706 (search for "Lyapunov" on the GitHub page).
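For context, the textbook baseline (not the technique from that post) is to evolve two nearby trajectories and keep renormalizing their separation, e.g. for the Hénon map:

    # Classic two-trajectory (Benettin-style) estimate of the largest Lyapunov
    # exponent for the Hénon map. Parameters and step counts are my own choices.
    import math

    def henon(x, y, a=1.4, b=0.3):
        return 1.0 - a * x * x + y, b * x

    def largest_lyapunov(steps=200_000, d0=1e-9):
        x, y = 0.1, 0.1                   # reference trajectory
        u, v = x + d0, y                  # nearby "shadow" trajectory
        total = 0.0
        for _ in range(steps):
            x, y = henon(x, y)
            u, v = henon(u, v)
            d = math.hypot(u - x, v - y)
            total += math.log(d / d0)
            # Renormalize the separation back to d0 along its current direction.
            u, v = x + (u - x) * d0 / d, y + (v - y) * d0 / d
        return total / steps

    print(largest_lyapunov())             # roughly 0.42 for these parameters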
Just because we haven't seen much progress doesn't mean we won't see more. Progress never happens on a predictable schedule.
People use chaos theory to make predictions about systems with attractors, and those predictions can have lower error than predictions from other models.
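The classic version of this is Lorenz's method of analogues: delay-embed the scalar series, then forecast by looking at where the nearest neighbours on the reconstructed attractor went next. A rough sketch, with the embedding dimension, lag, and toy series all being arbitrary choices of mine:

    # Delay embedding + nearest-neighbour ("analog") forecasting sketch.
    import numpy as np

    def delay_embed(series, dim=3, lag=1):
        n = len(series) - (dim - 1) * lag
        return np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])

    def analog_forecast(series, dim=3, lag=1, k=4):
        emb = delay_embed(series, dim, lag)
        query = emb[-1]                            # current state on the attractor
        dists = np.linalg.norm(emb[:-1] - query, axis=1)
        nbrs = np.argsort(dists)[:k]               # the k closest past states
        next_idx = nbrs + (dim - 1) * lag + 1      # where each neighbour went next
        return series[next_idx].mean()             # average of the analogues

    # Toy data: a chaotic logistic-map series.
    x, xs = 0.4, []
    for _ in range(2000):
        x = 3.9 * x * (1.0 - x)
        xs.append(x)
    xs = np.array(xs)

    print("forecast:", analog_forecast(xs[:-1]), "actual:", xs[-1])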
And then it sort of fizzled out, because while it's interesting and gives us a bit of additional philosophical insight into certain problems, it doesn't do anything especially useful. You can use it to draw cool space-filling shapes.
To @esafak I suggest following @westurner’s post.
I like the concept of Stable Manifolds. Classifying their types is interesting, and so are group symmetries on the phase space. Explaining this and more is not work I'm prepared to do here; use Wikipedia, ask ChatGPT, enrol in a course on Chaos and Fractal Dynamics, etc.
As a counter, I found that if you add an incorrect statement or fact that lies completely outside the realm of the logic-attractor for a given topic, the output is severely degraded. Or rather, a statement or fact that's "orthogonal" to the logic-attractor for the topic. It's very much as if the model is struggling to stay on the logic-attractor path, but the outlier fact causes it to stray.
Sometimes less is more.
I've only skimmed it but it very much looks like what I've been imagining. It'd be cool to see more research into this area.
1: https://news.ycombinator.com/item?id=45427778
2: https://towardsdatascience.com/attractors-in-neural-network-...
> It may diverge to infinity, for the range (+- 2) used here for each parameter this is the most likely event. These are also easy to detect and discard, indeed they need to be in order to avoid numerical errors.
https://superliminal.com/fractals/bbrot/
The above image shows the entire Buddhabrot object. Producing the image requires only a few simple modifications to the traditional Mandelbrot rendering technique: instead of selecting initial points on the complex plane, one per pixel, initial points are selected randomly from the image region (or larger as needed). Then each initial point is iterated using the standard Mandelbrot function to first test whether or not it escapes from the region near the origin. Only those that do escape are then re-iterated in a second pass. (The ones that don't escape, i.e. those believed to be within the Mandelbrot Set, are ignored.) During re-iteration, I increment a counter for each pixel the orbit lands on before eventually exiting. Every so often, the current array of "hit counts" is output as a grayscale image. Eventually, successive images barely differ from each other, ultimately converging on the one above.
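In code, those two passes look roughly like this (a small-scale sketch; the sample count, iteration limit, image region, and log scaling for display are arbitrary choices, and real renders use vastly more samples):

    # Small-scale Buddhabrot sketch in Python/NumPy.
    import numpy as np
    import matplotlib.pyplot as plt

    def buddhabrot(n_samples=1_000_000, max_iter=200, size=600, bailout=4.0):
        rng = np.random.default_rng(0)
        hits = np.zeros((size, size), dtype=np.uint32)

        # Initial points are chosen at random over the region, not one per pixel.
        c = (rng.uniform(-2.0, 1.0, n_samples)
             + 1j * rng.uniform(-1.5, 1.5, n_samples))

        # Pass 1: find the points whose orbits escape (i.e. lie outside the M-set).
        z = np.zeros_like(c)
        escaped = np.zeros(n_samples, dtype=bool)
        for _ in range(max_iter):
            z = np.where(escaped, 0, z * z + c)    # freeze an orbit once it escapes
            escaped |= (z.real**2 + z.imag**2) > bailout
        c = c[escaped]

        # Pass 2: re-iterate only the escaping points, incrementing a counter for
        # every pixel each orbit visits before it exits.
        z = np.zeros_like(c)
        for _ in range(max_iter):
            z = z * z + c
            alive = (z.real**2 + z.imag**2) <= bailout
            z, c = z[alive], c[alive]              # drop an orbit once it has left
            ix = ((z.real + 2.0) / 3.0 * size).astype(int)
            iy = ((z.imag + 1.5) / 3.0 * size).astype(int)
            ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
            np.add.at(hits, (iy[ok], ix[ok]), 1)
            if c.size == 0:
                break
        return hits

    plt.imshow(np.log1p(buddhabrot()), cmap="gray", origin="lower")
    plt.axis("off")
    plt.show()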
Is it possible to use the Buddhabrot technique on the Lyapunov fractals?
I've personally found the technique very versatile and have had a lot of fun playing around with it and exploring different variations. Was excited enough about the whole thing that I created a website for sharing some of my explorations: https://www.fractal4d.net/ (shameless self-advertisement)
With the exception of some Mandelbrot-style images, all the rest are produced by splatting complex-valued orbit points onto an image in one way or another.
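As a toy illustration of what I mean by splatting, here's the basic pattern with the Ikeda map in its complex form (not one of the formulas used on the site, just a simple stand-in with arbitrary parameters):

    # Iterate a complex map and histogram ("splat") every point the orbit visits.
    import cmath
    import numpy as np
    import matplotlib.pyplot as plt

    def splat_orbit(n_points=500_000, size=800, u=0.918):
        z = 0.1 + 0.1j
        pts = np.empty(n_points, dtype=complex)
        for i in range(n_points):
            t = 0.4 - 6.0 / (1.0 + abs(z) ** 2)
            z = 1.0 + u * z * cmath.exp(1j * t)    # Ikeda map, complex form
            pts[i] = z
        # Accumulate the orbit points into a 2D histogram image.
        img, _, _ = np.histogram2d(pts.imag, pts.real, bins=size,
                                   range=[[-2.5, 1.5], [-1.0, 3.0]])
        return img

    plt.imshow(np.log1p(splat_orbit()), cmap="inferno", origin="lower")
    plt.axis("off")
    plt.show()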