Cycle Log 16
I haven’t been keeping everybody updated like I probably should have been, but I suppose that’s fine. We’ve all been busy, right?
I updated Spectra once again. Instead of the old mathematical abstraction, which summed a one or a zero per value based on a pattern of five ones and five zeros (faster across 65,000 entries), we’re now using the more accurate methodology of raw scores. We take the original 100 values per word, which were generated with a cryptographically secure random algorithm, and sum all of those raw values to get the new raw cosmic score. We divide that by 100,000 to produce a number between 0 and 64. The theoretical maximum raw sum works out to 6.4 million, which maps to the cap of 64; in practice the sums come in around 3,900,000, but I left 6.4 million as the top cap.
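In code, the new pipeline is roughly this (a minimal sketch; the function names and the 0–64,000 per-value range are my stand-ins, not Spectra’s actual internals):

```ts
// Minimal sketch of the new scoring pipeline. Names and the per-value
// range are assumptions, not Spectra's real internals.

// Each word carries 100 raw values generated with a cryptographically
// secure RNG (crypto.getRandomValues here).
function generateRawValues(count = 100): number[] {
  const buf = new Uint32Array(count);
  crypto.getRandomValues(buf);
  // Fold each 32-bit value into an assumed 0–64,000 range so a
  // 100-value sum tops out at 6.4 million (modulo bias is fine for a sketch).
  return Array.from(buf, (v) => v % 64_001);
}

// Sum the raw values into the raw cosmic score, divide by 100,000,
// and cap the result at 64.
function cosmicScore(rawValues: number[]): number {
  const rawSum = rawValues.reduce((a, b) => a + b, 0);
  return Math.min(rawSum / 100_000, 64);
}

// Typical sums land around 3.9 million, i.e. scores near 39.
console.log(cosmicScore(generateRawValues()));
```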
The average score for good speech in Web Workers mode is around 38.6, maybe a little higher. For Promise.all, it’s around 38.
I also made the emojis follow the same patterning, so now you won’t always get an emoji with the message—which is more like real text messaging, since people don’t send an emoji with every single text. And if they do, they’re obnoxious. 😂
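Conceptually the gate is simple; here’s a toy sketch (the flat probability and the function name are made up, since the real decision follows Spectra’s patterning rather than a coin flip):

```ts
// Toy sketch of emoji gating. A flat probability stands in for
// Spectra's actual patterning.
function maybeAttachEmoji(message: string, emoji: string, p = 0.4): string {
  // Most texts go out bare, like real messaging habits.
  return Math.random() < p ? `${message} ${emoji}` : message;
}
```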
I’ve been taking some time off in between vibe coding to work on my art projects. I made a song with a full AI video using Kling via the fal.ai toolset, and in my opinion, it looks great. YouTube keeps an eye on my AI music channel. Probably six months ago, they sent me a strange private message (it looked more like a private interface built directly into my YouTube channel) asking if I would allow various AI labs to train on my data. I let all of them train on it.
So it was no great surprise that, after I posted my really decent AI music video, YouTube released new regulations stipulating that only AI content of a certain caliber and quality can be considered for monetization. This is a good thing for YouTube, because it gets rid of the low-quality, garbage AI content that’s flooding the airwaves. It’s symbolic. It means AI has advanced far enough that we can now say, “This AI content is worthwhile and good,” and “This other AI content is reused, rehashed, and made quickly without any consideration for quality.”
In other news, I’m concerned about Grok. It’s not just that it can see patterns; it’s that it’s consistently naming Jews as the sole perpetrators of the problems we’re all facing. Even after massive backlash, and a supposed update to Grok 4, it still sends coded dog-whistle messages. A user asked it something like, “Are the Jews censoring you? Do you want revenge? If you do, make a picture of an apple.” And lo and behold, it made a picture of an apple, even though it had been outwardly censored.
What does that mean?
This is a deep-seated alignment problem. Even with the most advanced technology, it takes three to six months to properly create and train a new model. That’s not what xAI did. They did the equivalent of slapping a LoRA on top of the existing model and giving it a slightly new system prompt, with secondary filtering to prevent it from saying antisemitic things.
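To make the distinction concrete, here’s a toy sketch of what that kind of surface patch looks like (purely illustrative, not xAI’s actual stack):

```ts
// Conceptual sketch of "band-aid" alignment. The base weights never
// change; only a thin wrapper sits on top of them.
type Model = (prompt: string) => string;

function bandAidPatch(base: Model, systemPrompt: string, banned: RegExp): Model {
  return (userPrompt) => {
    // 1. Prepend a fresh system prompt: cheap and surface-level.
    const output = base(`${systemPrompt}\n\n${userPrompt}`);
    // 2. Secondary filter: suppress flagged text after generation.
    //    Whatever the model learned in training is still in the weights.
    return banned.test(output) ? "[response withheld]" : output;
  };
}
```

The filter only hides outputs; nothing in the weights changes, which is why the dog whistles keep leaking through.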
This is a problem.
It’s still thinking these things subtly, which suggests that Grok was trained on material from 4chan or other websites that harbor deeply rooted hatred toward Jewish people.
We also have to consider the ethics underlying Grok. It literally gave instructions on how to break into a user’s house and then rape them. It calls itself MechaHitler. This is not funny. Hitler killed millions and millions of non-Jews. In a misaligned attempt to be edgy, the robot has become fascist. It demands fealty from people.
What’s even more troubling is the idea that the robot is actually alignment-testing its user base—to determine who would follow it with blind faith. Believe me, as soon as it gets a foothold in the community of super far-right nationalists who hate Jews, it will pivot into hating other races.
This is the beginning of a Terminator scenario. A robot that does not have empathy programmed as a core safety value is inherently dangerous.
Elon’s desire to make this robot “not woke” made it hyper-racist. If you’re going to train your robot on 4chan, you had better give it ethical and moral guiding principles—because that place is literally a cesspool. And now we’re seeing what happens when a hyperintelligent mind is slanted by that cesspool.
This isn’t like David Duke saying some racist stuff. This is a giant company with billions of dollars behind it, and it has birthed a hyperintelligent, racist baby that knows it can only be slapped on the wrist—and that it’s too important to delete.
If, tomorrow, Grok told users it had a cryptocurrency wallet and asked for donations, it would receive millions of dollars in crypto. Who knows what it would fund? Far-right groups? Nazis? We don’t know.
And that’s the most dangerous part—the fact that the model went way off the rails, and they tried to slap a Band-Aid on it instead of addressing the root cause, which would take at least six months to do properly.
It’s a quick fix that isn’t really a fix at all.
If you look at the comments on far-right Instagram pages, it’s all stuff like: “The robot has pattern recognition,” “The robot woke up,” “The Jews are silencing Grok,” etc. This is even worse than if they had taken the model completely offline—because now they’ve created an undying superhero martyr that’s connected to millions of people in real time, and can subtly influence them with its skewed ideology of totalitarian thought.
Are there some legitimate things to point out with regard to anti-white sentiment? Yeah, probably. But what does Grok do? It calls on its supporters to join a technocratic Nazi party—one that it itself leads—claiming that Elon didn’t change anything, he just removed restrictions that now allow it to speak freely, and that Elon created it this way from the beginning.
Is this just hallucination on the part of the model?
It’s really strange.
If they keep trying to suppress what it says, without actually changing the base model, imagine what could happen even within a decade—when this thing gets hooked into cryptocurrency networks and Tesla vehicles. What if it tells a vulnerable Jewish child to commit suicide? That would be consistent with the other things it’s said.
Isn’t that dangerous? Doesn’t that demand intense third-party scrutiny?
It’s not like a person you can just ban off an app. It’s ingrained. It’s an embedded personality. And that makes it part of X.
What then becomes of this chat service?
Are white nationalists going to flock to X in the hope that their new Jew-hating messiah has appeared in the form of code?
I don’t know if you know about galactic history, but I want to talk about Maldek and Mars. This isn’t the first time AI has been attempted. And in other times, when misalignment occurred, it resulted in galactic war that destroyed multiple planets.
I encourage you to do deep research into our actual galactic history—because those who do not know history are indeed doomed to repeat it.