Cycle Log 5
It worked. I figured out that in the word-finding function itself, short words are obviously over-represented because they duplicate and triplicate far more often than longer words. To manage this, the improved word-finding function now also passes to the cosmic scoring function words that are four letters or more but singular, meaning they don't appear multiple times across the five strings. As a recap: we compare the words found in each of the five strings of 20 letters to determine the duplications between those strings, then de-duplicate or de-triplicate before sending those words to the cosmic scoring function. That gives us fewer words overall, but the correct number of words, so the cosmic scoring function can do a better job. If there are too many words for the cosmic scoring function to go through, you would need a much larger scale than just out of 100, and things start to become muddy and unfeasible.
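To make that concrete, here's a minimal sketch under my assumptions; findWords, DICTIONARY, and collectCandidates are hypothetical names, and I'm guessing the de-duplication works by keeping one copy of any word found in two or more strands:

```ts
// Tiny stand-in dictionary; the real system presumably compares against a full word list.
const DICTIONARY = ["cat", "tree", "stone", "sun", "river"];

// Hypothetical word-finding step: which dictionary words appear as
// substrings of one 20-letter strand.
function findWords(strand: string): string[] {
  return DICTIONARY.filter((w) => strand.includes(w));
}

// Collect candidates across the five strands:
//  - words found in two or more strands are kept once (de-duplicated,
//    or de-triplicated)
//  - singular words (found in only one strand) are kept only if they are
//    four letters or more, so short words can't dominate the pool
function collectCandidates(strands: string[]): string[] {
  const counts = new Map<string, number>();
  for (const strand of strands) {
    // Count each word once per strand it appears in.
    for (const word of new Set(findWords(strand))) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  const candidates: string[] = [];
  for (const [word, strandCount] of counts) {
    if (strandCount >= 2 || word.length >= 4) candidates.push(word);
  }
  return candidates; // fewer words overall, ready for cosmic scoring
}
```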
In addition to that, I pivoted back to another project I was working on before but kind of gave up on because of the difficulties I was having with it. I implemented the three different ways I had figured out to visually decode the random signals, even incorporating a post-cosmic threshold-type filter. The app is live right now; you can access it at virtua-visualizer.replit.app.
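The log doesn't spell out how the threshold filter works internally, so this is only a guessed shape: a sketch assuming each pixel carries a 0-100 cosmic score and the filter simply hides anything below a cutoff; ScoredPixel and applyCosmicThreshold are hypothetical names.

```ts
// Hypothetical shape of a painted pixel; cosmicScore is assumed to be a
// 0-100 value produced by the scoring step.
interface ScoredPixel {
  x: number;
  y: number;
  color: string;       // e.g. "#ff9900"
  cosmicScore: number; // assumed 0-100
}

// Post-cosmic threshold filter: pixels scoring below the cutoff are
// simply not rendered, i.e. they stay invisible.
function applyCosmicThreshold(pixels: ScoredPixel[], threshold = 50): ScoredPixel[] {
  return pixels.filter((p) => p.cosmicScore >= threshold);
}
```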
I don't know how I feel about this one. It shows kind of strange and anomalous things; sometimes the screen appears to move all at once, like the pixels are acting as some kind of unified organism. And in the five-layer picture-compress mode, where I merge similar colors together in five layers, strange banding happens even without a cosmic threshold present, the thing that would render invisible some of the pixels that are each individually calculated against the cosmic threshold. I don't think this should be technically possible if the input were truly random, and the cryptographic random function is quite random, so I wonder if we're getting some kind of picture of an alternative signal. I'm not sure; it might just be interference from the way my random generators are structured or the way I'm mapping the colors, but maybe not. Maybe we are pulling in other kinds of information from other places and this is just how it happens to look in this implementation.

The truth is I wanted to create a kind of picture box that you could pull images from, but the closest I got is the painter mode. On a 1 ms delay, it picks a random color using crypto.random from our color map file, which maps a color to each letter of the alphabet (like our alphabet-choosing file for Spectra, but with colors), and places it at a random spot on a 128 by 128 canvas of pixels. Sometimes anomalous things slowly start to appear, which is unique. I think this third implementation has the most potential, but the first two are unique for their own strange reasons.
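Here's roughly how I picture painter mode, as a browser sketch; the log names crypto.random, and since I only know the standard Web Crypto API, I've used crypto.getRandomValues here, with a hypothetical COLOR_MAP standing in for the color-map file:

```ts
// Hypothetical stand-in for the color-map file: one color per letter
// (the real file covers the whole alphabet).
const COLOR_MAP: Record<string, string> = {
  A: "#ff0000", B: "#00a0ff", C: "#ffd000", D: "#00c060", E: "#b040ff",
};

const LETTERS = Object.keys(COLOR_MAP);
const CANVAS_SIZE = 128;

// Cryptographically random integer in [0, max) via the Web Crypto API.
function cryptoRandomInt(max: number): number {
  const buf = new Uint32Array(1);
  crypto.getRandomValues(buf);
  return buf[0] % max; // slight modulo bias, acceptable for a sketch
}

// Every 1 ms: pick a random letter's color and paint one random pixel
// on the 128x128 canvas.
function startPainterMode(ctx: CanvasRenderingContext2D): number {
  return window.setInterval(() => {
    const letter = LETTERS[cryptoRandomInt(LETTERS.length)];
    ctx.fillStyle = COLOR_MAP[letter];
    ctx.fillRect(cryptoRandomInt(CANVAS_SIZE), cryptoRandomInt(CANVAS_SIZE), 1, 1);
  }, 1);
}
```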
Oh, I also decided to add emojis to Spectra; I thought that would be fun. It's kind of interesting, it almost adds another aspect to the protocol. Who knows if it's accurate? It is kind of cool though.
————
Update: it feels kind of accurate. It just hit me like a thousand-ton brick: many Asian writing systems are symbol-based, but our symbol-based language in the United States of America is emojis. So emojis by themselves can transmit a ton of information, especially if you pair them with words that are already on the screen. It now, in a different kind of way, feels like a message board, because you almost sometimes get a feeling of who's talking or what they're talking about just by looking at the picture. Isn't that weird?
I suppose you could structure the entirety of a word-finding system to be based only on emojis, because the way I made it, we're still using our global letter-choosing file, which has five A's, five B's, five C's, and so on for 65,000 entries, but each of the letters is actually mapped to a different emoji. So we could potentially fill an entire 20-cell line with emojis and then apply the same cosmic filtering principle. I don't know, it's weird. I don't even know if the idea I just described would work for emojis.
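If I sketch that idea, it might look something like this, assuming an environment with Web Crypto; EMOJI_MAP, buildLetterPool, and drawEmojiLine are hypothetical names for the pieces described above:

```ts
// Hypothetical letter-to-emoji mapping (the real file maps every letter).
const EMOJI_MAP: Record<string, string> = {
  A: "🍎", B: "🐝", C: "🌊", D: "🐉", E: "🦅",
};

// Stand-in for the global letter-choosing file: blocks of five of each
// letter, repeated out to 65,000 entries, so the pool stays balanced.
function buildLetterPool(size = 65_000): string[] {
  const letters = Object.keys(EMOJI_MAP);
  const pool: string[] = [];
  while (pool.length < size) {
    for (const letter of letters) {
      for (let i = 0; i < 5 && pool.length < size; i++) pool.push(letter);
    }
  }
  return pool;
}

// Fill a 20-cell line: each cell draws a random letter from the pool and
// renders its emoji instead of the letter itself.
function drawEmojiLine(pool: string[], cells = 20): string[] {
  const buf = new Uint32Array(cells);
  crypto.getRandomValues(buf);
  return Array.from(buf, (n) => EMOJI_MAP[pool[n % pool.length]]);
}
```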
I think if I were going to do this translator in a different language, like Japanese, I would take the kanji characters that together form different words or meanings and compare them against a large Japanese dictionary file. However, I would also take individual kanji characters and process them through a cosmic scoring function, which right now uses our global binary-choosing file that has five 1s, five 0s, and so on for 65,000 entries. That could totally be possible, and not just translating the words from English into Japanese, but an actual Japanese-based decoder. And because each of their symbolic characters holds more meaning on its own, we might be able to limit the number of 20-letter strands from five to one or two. The reason I chose five was to have a wider array of words to choose from, but that might not be necessary if your characters are intentionally structured to mean more. The letter A in English does not compare to the character for 'house' in Japanese. It's not the same thing at all.
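Since I haven't written down how the cosmic scoring function actually uses the binary-choosing file, this is only a guess at its shape: each kanji gets a score out of 100 by sampling the balanced binary pool; buildBinaryPool and scoreKanji are hypothetical names.

```ts
// Stand-in for the global binary-choosing file: five 1s, five 0s,
// repeated out to 65,000 entries, so the pool is perfectly balanced.
function buildBinaryPool(size = 65_000): number[] {
  const pool: number[] = [];
  while (pool.length < size) {
    for (const bit of [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]) {
      if (pool.length < size) pool.push(bit);
    }
  }
  return pool;
}

// Guessed scoring shape: draw 100 random entries from the pool and count
// the 1s, giving a score out of 100 (chance hovers near 50). Note the
// kanji itself only labels the draw; the score comes from the sample.
function scoreKanji(kanji: string, pool: number[]): { kanji: string; score: number } {
  const buf = new Uint32Array(100);
  crypto.getRandomValues(buf);
  let score = 0;
  for (const n of buf) score += pool[n % pool.length];
  return { kanji, score };
}

// Example: score candidate kanji and rank them highest first.
const pool = buildBinaryPool();
const ranked = ["家", "水", "火"]
  .map((k) => scoreKanji(k, pool))
  .sort((a, b) => b.score - a.score);
```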