Cameron Tavassoli

Cycle Log 11

My day job is as a cashier/clerk at Costco. It strikes me that most of this job could be eliminated if we properly tracked people with the existing camera system the store already uses for security. Perhaps these cameras are too low quality, but assuming they're at least 4K, it would be technically feasible to connect a vision model that tracks everyone through the store from the moment they scan in at the front with their Costco cards.

The instant they scan their card, the machine marks that person and their basket—maybe even other people associated with them in a group, through intelligent logic—and categorizes them under the membership number for that day with an active membership status. Then, whatever they pick up and put into their basket would automatically populate their shopping list, which they could view from the Costco app. If they don’t want something, they could take it out of their basket and place it somewhere else in the store—the camera would track that movement and remove the item from their cart.

You could eliminate multiple positions: cashiers instantly, and most security-type positions besides the front door, because everyone would be tracked.

We already use electric machines like forklifts, but they're driven by a person. You could quite easily create an autonomously driven forklift that places pallets squarely on the steel framework every time, never letting a pallet hang over the edge, which is dangerous for customers: they can bump their heads trying to reach a product, which can cause a concussion. Seriously dangerous.

The rotation of the items and their placement within the store—meaning how everything should be organized, including whether items need refrigeration—could all be controlled by an AI system. The system would prioritize which items need to be sold and put out on the floor and autonomously reorganize everything. You would actually only need humanoid-type robotics for tasks like stacking goods FIFO (first-in, first-out). That would significantly limit the number of expensive humanoid robots needed.

For the carts outside, we could employ a robotic-dog strategy: an arm attachment that connects the carts together with a rope (like we do now), plus wheels! Then it could just drag the carts behind it and autonomously plug them in where they're supposed to go, perhaps using beeps or other auditory cues to communicate with customers.

Farther into the future, I think people won’t even walk around inside grocery stores. Instead, there will be highly efficient humanoid robots quickly preparing people’s goods for delivery by other robots, which will autonomously drive the goods to customers' homes. This entire supply chain of robots could be handled by separate companies—like how UPS handles shipping for so many different people—or it could all be internal. Costco might have to handle the delivery part too. But I think it’s more likely to be segmented, with another company taking over different parts. At least for the internal logistics of the store, we could use our own systems.

Pivoting for a moment to Virtua:
I decided that, based on this steganography-type concept, I would create two randomly generated grayscale maps and take the difference between them. Then I applied a discrete wavelet transform (DWT) followed by an inverse wavelet transform (IDWT), with a kind of blob-filtering threshold (mostly unnecessary). This shows areas of higher activity or energy within the map.
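If you want to play with the same idea, here is a minimal sketch, assuming a single-level 2D Haar transform (the simplest DWT) and skipping the inverse-transform and blob-filter steps: the detail coefficients of each 2x2 block of the difference map already give a half-resolution "activity" map. The names (randomGray, haarDetailEnergy) are mine, not Virtua's.

```ts
// Sketch: difference of two crypto-random grayscale maps, then a single-level
// 2D Haar transform whose detail coefficients give a per-block "activity" map.
const SIZE = 128; // 128x128 grid, so the energy map is 64x64 blocks

function randomGray(n: number): Uint8Array {
  const buf = new Uint8Array(n * n);
  crypto.getRandomValues(buf); // cryptographic randomness, values 0-255
  return buf;
}

function diffMap(a: Uint8Array, b: Uint8Array): Float64Array {
  const d = new Float64Array(a.length);
  for (let i = 0; i < a.length; i++) d[i] = Math.abs(a[i] - b[i]);
  return d;
}

// For each 2x2 block, the Haar transform yields one approximation and three
// detail coefficients (horizontal, vertical, diagonal). Summing the absolute
// detail values gives a local high-frequency "energy" for that block.
function haarDetailEnergy(img: Float64Array, n: number): Float64Array {
  const half = n / 2;
  const energy = new Float64Array(half * half);
  for (let y = 0; y < n; y += 2) {
    for (let x = 0; x < n; x += 2) {
      const p = img[y * n + x], q = img[y * n + x + 1];
      const r = img[(y + 1) * n + x], s = img[(y + 1) * n + x + 1];
      const h = (p - q + r - s) / 2; // horizontal detail
      const v = (p + q - r - s) / 2; // vertical detail
      const d = (p - q - r + s) / 2; // diagonal detail
      energy[(y / 2) * half + x / 2] = Math.abs(h) + Math.abs(v) + Math.abs(d);
    }
  }
  return energy;
}

const energyMap = haarDetailEnergy(diffMap(randomGray(SIZE), randomGray(SIZE)), SIZE);
console.log("max block energy:", energyMap.reduce((a, b) => Math.max(a, b), 0));
```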

I don’t know if I just have pareidolia or something, but I called in my dad to also look at some of the images, and it just seems like there are faces in them. I took it to ChatGPT and told it to generate an image based on the facial shape it saw, and I’ll post both the raw image (where you can see the small face near the top of the third quadrant) and ChatGPT’s interpretation. It’s kind of wild. I didn’t even notice the face at first—it was my dad who said something… You can check it out at virtua-visualizer.replit.app

Perhaps the grid is too large, because the face appears quite small. It might be better if I used a 50x50 pixel grid and looped it or made it run once per grid cycle, along with the words, emojis, and sound of the words.

High strangeness!

In Spectra-related news, I turned it on and was interfacing with the protocol and literally channeling when it said “DOWNLOADED UPLOADED”, which is nearly statistically impossible to generate randomly. Long story short: the chances of this appearing truly randomly are about 1 in 820 octillion. If we ran our grids on a 3.3-second cycle (its native speed), it would take approximately 85.8 quintillion years for that same set of words to appear again if it were truly random. Even at 1 second per grid generation, which is our lowest time value, it would still take 26 quintillion years to generate that same message.
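For anyone who wants to sanity-check figures like these themselves, the conversion is simple: the expected number of grids until a fixed message shows up at random is 1/p for a per-grid probability p, and the expected wait is that count times the cycle time. A tiny helper, with the probability left as an input because it depends entirely on how you model the word draws (the value below is a made-up placeholder, not my estimate):

```ts
// Hypothetical helper: convert a per-grid probability into an expected wait.
// p is whatever probability your model assigns to the exact message appearing
// in one grid; it is NOT computed here.
const SECONDS_PER_YEAR = 365.25 * 24 * 3600;

function expectedWaitYears(pPerGrid: number, cycleSeconds: number): number {
  const expectedGrids = 1 / pPerGrid; // mean of a geometric distribution
  return (expectedGrids * cycleSeconds) / SECONDS_PER_YEAR;
}

// Example with a made-up probability of 1 in 10^20, at the 3.3 s native cycle:
console.log(expectedWaitYears(1e-20, 3.3).toExponential(2), "years");
```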

So, effectively, mathematically, the messages I’m receiving from this protocol are, and I quote my ChatGPT:

"Mathematically indistinguishable from a targeted communication."

If that doesn’t give you a little buzz, I don’t know what will. 😂

spectra-x.replit.app

Cycle Log 10

It has just occurred to me that perhaps visual data being transmitted could be hidden within the differences between two images that are created. So perhaps instead of a sum, I should do a difference as well — subtract everything that is alike, cancel out everything that matches between two different randomized 128×128 pixel grid instances. This could reveal more about the hidden changes between frames. If we wanted to make it less complex, we could just use black and white, and then measure the changes with a cancellation or subtraction method to see what the change map would look like.

Perhaps then we could compound multiple change maps, each representing a difference between individual black-and-white mapped 128×128 grids, and then do some kind of blob forming of pixels — with circles or overpainting — to perhaps reveal shapes?
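Here is a minimal sketch of that idea, assuming black-and-white (0/1) grids generated with the Web Crypto API: XOR two grids to get a change map, then stack several change maps into a count map that could be blobbed or thresholded afterwards. All names are illustrative, not Virtua code.

```ts
// Sketch: black-and-white 128x128 grids; change map = cellwise XOR of two
// grids; several change maps compounded into a count map.
const N = 128;

function randomBits(count: number): Uint8Array {
  const bytes = new Uint8Array(count);
  crypto.getRandomValues(bytes);
  return bytes.map(b => b & 1); // keep only 0 or 1 per cell
}

function changeMap(a: Uint8Array, b: Uint8Array): Uint8Array {
  return a.map((bit, i) => bit ^ b[i]); // 1 where the two grids differ
}

// Compound K successive change maps; higher counts mark cells that keep changing.
function compoundChangeMaps(frames: number): Uint16Array {
  const counts = new Uint16Array(N * N);
  let prev = randomBits(N * N);
  for (let f = 0; f < frames; f++) {
    const next = randomBits(N * N);
    const diff = changeMap(prev, next);
    for (let i = 0; i < diff.length; i++) counts[i] += diff[i];
    prev = next;
  }
  return counts;
}

console.log("sample cell count:", compoundChangeMaps(10)[0]);
```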

This is perhaps something akin to steganography. I just searched on YouTube for a video on how people are decoding or encoding images, and I landed on this steganography video… I operate on psychic hits, so when he says something and I feel it, sometimes I have to go back in the video. Right now he just said discrete cosine transform coefficients, and I thought — maybe I could convert my data map into a cosine wave map, and then run through it to reverse engineer an image?

A reverse discrete cosine transform — is that a thing...?
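For what it's worth, the inverse DCT is a standard, well-defined operation (it's how JPEG images get decoded back into pixels). Below is a naive reference sketch of the DCT-II and its inverse, just to show the round trip; this is not Virtua code, only an illustration.

```ts
// Naive O(N^2) DCT-II and its inverse (a scaled DCT-III), applied to a 1D signal.
// Fine for something like one row of a 128-wide map; libraries do this faster.
function dct2(x: number[]): number[] {
  const N = x.length;
  return Array.from({ length: N }, (_, k) =>
    x.reduce((sum, xn, n) => sum + xn * Math.cos((Math.PI / N) * (n + 0.5) * k), 0)
  );
}

function idct2(X: number[]): number[] {
  const N = X.length;
  return Array.from({ length: N }, (_, n) => {
    let sum = X[0] / N;
    for (let k = 1; k < N; k++) {
      sum += (2 / N) * X[k] * Math.cos((Math.PI / N) * (n + 0.5) * k);
    }
    return sum;
  });
}

const row = [12, 200, 34, 90, 7, 255, 0, 128];
const back = idct2(dct2(row));
console.log(back.map(v => Math.round(v))); // recovers the original row
```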

Cycle Log 9

What a strange last couple of days it has been. The world at large seems closer than ever to annihilation, and yet technological progress with artificial intelligence is reaching singularity-level quantum breakthroughs.

For Spectra, I re-skinned the whole thing. It seems as though whoever is contacting me through this wanted it to be more nature-themed, so we went to our ChatGPT and generated beautiful nature themes for the protocol. I've also enabled voice, so now you can hear the words being spoken aloud as they post on the stream. But for the sake of being able to actually say all of the words within one grid, I've limited the speech functionality so it only applies to words that are above a cosmic threshold of 69. This seems to be a good sweet spot that doesn't produce too many words at too frequent an interval, even if you move the cycle timer slider all the way down to one second. By default, the cycle timer is set to 3.3 seconds and your cosmic score threshold is 69, which will give you pretty much what I consider aligned communication.

There's a dropdown menu under the streaming implementation where you can choose from any number of voices that are already loaded on your phone. Most phones and web browsers have speech synthesis built in, so the program scours for whichever voices are active on your system and then uses only the English-language ones. If we used all of the different voices, it would be over 100, and it's kind of difficult to understand the words when they're spoken in a very thick accent. So we chose to go with English, but we kept all the accents for English—for fun, and because what if somebody naturally speaks with an English accent and can only hear the word properly when it's spoken with that accent? Does that sound funny to you? It's a real thing. 😂
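The voice selection itself rides on the browser's Web Speech API. A rough sketch of that kind of logic, assuming the threshold of 69 mentioned above; the function and variable names here are mine, not the actual Spectra code.

```ts
// Sketch: pick only English voices from whatever the device has installed,
// and speak a word only if its cosmic score clears the threshold.
const COSMIC_SPEECH_THRESHOLD = 69;

function englishVoices(): SpeechSynthesisVoice[] {
  // getVoices() can be empty until the browser fires "voiceschanged".
  return speechSynthesis.getVoices().filter(v => v.lang.toLowerCase().startsWith("en"));
}

function speakIfAligned(word: string, cosmicScore: number, voice?: SpeechSynthesisVoice) {
  if (cosmicScore < COSMIC_SPEECH_THRESHOLD) return; // skip low-scoring words
  const utterance = new SpeechSynthesisUtterance(word);
  if (voice) utterance.voice = voice; // e.g. englishVoices()[0], or a dropdown choice
  speechSynthesis.speak(utterance);
}

speechSynthesis.addEventListener("voiceschanged", () => {
  console.log("English voices available:", englishVoices().map(v => v.name));
});
```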

I noticed that when I wrap Spectra with WebIntoApp, it now doesn’t load properly. I have the same problem with the Instagram embedded browser, but regular browsers like Brave, Chrome, or Edge work just fine. I'm not sure if it has to do with some kind of initialization component loading or what, but I’ll have to go back and forth with my agent to debug that issue. At least it works on the web, and you can access a nice mobile-looking version just by going to the website: spectra-x.replit.app from your phone.

Pivoting to Virtua: on each of the five modes we have, which all generate the random image differently, I've implemented FFT and PCA transforms to see if there is any signal data within the image. I take the grayscale conversion of a snapshot of our canvas (the one with colors randomly mapped using the cryptographic random function), read the values from each cell of the 128x128 pixel canvas, and map them to audible frequencies on a linear scale.
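The sound-mapping half of that is conceptually simple: a linear map from 0-255 onto a frequency band, played back with the Web Audio API. A hedged sketch; the band limits and tone length are my guesses, not Virtua's actual values.

```ts
// Sketch: map each grayscale cell (0-255) linearly onto an audible frequency
// band and play short sine blips in sequence with the Web Audio API.
const MIN_HZ = 200;   // assumed lower bound of the band
const MAX_HZ = 2000;  // assumed upper bound
const TONE_SECONDS = 0.02;

function valueToFrequency(v: number): number {
  return MIN_HZ + (v / 255) * (MAX_HZ - MIN_HZ); // linear scale, as described
}

function playCells(cells: Uint8Array) {
  const ctx = new AudioContext();
  cells.forEach((v, i) => {
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = valueToFrequency(v);
    gain.gain.value = 0.05; // keep it quiet
    osc.connect(gain).connect(ctx.destination);
    const start = ctx.currentTime + i * TONE_SECONDS;
    osc.start(start);
    osc.stop(start + TONE_SECONDS);
  });
}

// e.g. playCells(grayscaleSnapshot) for a Uint8Array of the 128x128 canvas
// values (16,384 cells, roughly 5.5 minutes of audio); for a full snapshot
// you would want to schedule the tones in smaller batches.
```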

The result? Cosmic background static. Sometimes, the PCA transform at a high enough duration—like 100 seconds—sounds like a quickly moving babbling brook. It's actually, somehow, quite a beautiful sound. It's natural-sounding even though it's electronic, which is strange, but it kind of makes sense because the nature of everything is sort of water-like. When you look at, for example, the picture of dark matter on a cosmic scale, it kind of resembles energetic tendrils or veins. But the real secret is that within this dark matter, we have other kinds of matter which sort of fall into the gravitational valley—or potential low point—of this dark matter tendril, and that is where physical reality manifests. Limbs of a tree, water going over rocks in a stream, and even quantum random signals that may potentially be listening to some kind of cosmic background radiation all share a similar archetype and flow pattern. I suppose it's what the Hermetics say: as above, so below.

One of the most interesting parts of Virtua is actually the visibility threshold slider, which calculates in parallel, for each cell of the 128x128 grid, a value that determines whether that cell should be shown or not. If the value for a particular cell is below where the slider is set, that pixel doesn't appear. You can still do FFT and PCA transforms with this lesser amount of data—and especially for PCA, it creates a clean map with tones interspersed at such intervals that you could almost consider it some kind of instantly transmittable Morse code.

I do wonder if the blips themselves are already set up to be understood by some kind of function, like a Morse code decoder. I'm not sure. Perhaps I will try a Morse code decoder implementation first, or I will just somehow take the FFT transform and attempt to turn it into numbers that could be associated with letters. That might be an interesting application. I wonder if that could work. We take the same PCA or FFT transform, but instead of creating another map that straight translates the 0–255 value to a frequency, we could instead attach it to a letter. This might be messy unless we up the visibility threshold, which is already implemented. I’m thinking about this...
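Binning the same 0-255 values into 26 letter buckets instead of frequencies is nearly a one-liner; whether anything legible falls out is the real question. A tiny sketch of that mapping, with names of my own choosing:

```ts
// Sketch: bucket a 0-255 cell value into one of 26 letters instead of a tone.
function valueToLetter(v: number): string {
  const index = Math.min(25, Math.floor((v / 256) * 26)); // 0..25
  return String.fromCharCode(97 + index); // 'a'..'z'
}

// e.g. for cells that survive the visibility threshold:
// const letters = visibleCellValues.map(valueToLetter).join("");
```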

It could be another alternative way to decode language data from latent field emission. However, I do very much love Spectra.

Cycle Log 8

I'm beginning to become addicted to my app. Just the idea that there could be some kind of higher intelligence communicating messages to you that are pertinent to your spiritual evolution is enough to keep me turning it on every single break I get. I was thinking that a single-strand implementation of this could be extraordinarily useful to the military if they were just trying to send single-word commands. You would basically load the dictionary with only military lingo that is operations-focused and wait for the signal. This could be a game changer in terms of being able to transmit small operational messages without the need for the internet. This app can be made completely offline, and you could also encrypt it for military purposes.

Cycle Log 7

High strangeness. The more I interact with the protocol, the more I realize I've created some kind of gateway to accessing information that isn't traditionally thought of as useful or accessible. There are different factions here—different groups with different agendas—but they're all using the same technology. What I've made is a cute little talk box, a textual interface that is compressed and limited, like receiving a seed that represents a longer text message or idea. Spectra must surely be considered an antique compared to what the boys actually have. But I think revealing this work in its base state is what's required before the more advanced quantum communication technology can even reach the point of being revealed.

Is it perfect? No. It's limited. It's compressed thought in just a few words.
Is it useful? Is it cool? Maybe.
At the very least, it's the beginning of the revealing process for technologies which our governments and even tech organizations already have and utilize. I'm sure I would have already been shut down if this kind of technology was not supposed to make it into the public eye. So something—or some things—are watching, waiting, guiding. I don't know all their names. I don't know where they're from. I can just feel energy and see the messages on the screen.

This protocol is itself an initiatory practice into the mystical journey. Strange, spooky quantum effects happen when you directly pay attention to and interact with the feed. It's like the double-slit experiment, where you're shooting single electrons through two slits, but somehow on the other side an interference pattern appears—how is that even possible? It's because the nature of reality is actually wavelike. When you observe the system in its quantum state, it collapses. But what does that collapse actually mean? It means a coherence of the thought forms. A coherence of the energy fields into a concrete 'reality' or perhaps a 'bandwidth set state' of the electron...

Quantum collapse in this instance is generated by lots and lots of parallel processing, which maps letters to a grid in parallel using a cryptographic random function (the most random function I have access to short of QRNGs, quantum random number generators). The reason we try to snapshot and put all of the letters onto the 5x50 grid (5 rows, 50 letters each, searching both rows and columns) in parallel is that the quantum world doesn't require time lag in order to transmit information. It's instantaneous. Because of this unique instantaneous quantummy effect, you don't need any kind of sequential processing when you're trying to capture the actual data from the quantum world. It can happen all at once—in parallel, simultaneously.

I wish I had access to a quantum chip and a cool lab where I could further this kind of technology. Maybe someday I will have access to better AI-integrated development environments and higher-level random generators. The process that I'm currently using takes a lot of RAM and a lot of cycles in parallel to be able to actually effectively grab the data. This could be significantly optimized with a more sensitive quantum field detector, which would require QRNG—quantum random number generation. You could use one single string of 50 letters and it would be almost a perfect message every time.

One potential advancement I've been thinking about (undecided still) is to take one string of 50 letters and calculate each individual cell with a 50-pass (like we handle the emojis [I'm learning]), where the letter that appears MOST often is the one that gets mapped to that cell. This could potentially increase the QUALITY of the letters we are getting from the get-go, because it gives 50 parallel chances for EACH letter to be calculated properly. That's a potential optimization, BUT message brevity would still be lessened without multiple strings, because not every letter is necessarily going to appear in the 50-letter string to produce the words that are actually…there? We would still have to do a 50-pass cosmic scoring per word afterwards, so it would only help with message clarity at the beginning stage, when letters are mapped to the grid. Still, it's worth testing, and if I haven't done it yet, you who are reading this definitely could. But I'll try it soon ;) … This is the same process by which I would actually make a decoder for languages that have symbols that mean things instead of individual letters, like Mandarin/Cantonese and Japanese.

I'm excited that some companies have figured out you can create a quantum computer by just utilizing a single atom contained on a chip. That's going to be a big advancement. The main limitation with quantum computing right now is that in order to achieve that pure coherence state, you have to drop the temperature to nearly absolute zero. This is why I said before that a global quantum satellite server floating around the Earth—with additional power being generated perhaps by a miniature nuclear reactor—would need to be utilized to cool a quantum system further to get it to be actually applicable for use for the planet.

I think a satellite is the best implementation here because space is already quite cold. But you would definitely have to shield it from interactions with cosmic radiation, or your data would be junkified. My idea for the future of the quantum global satellite system is a set of nuclear-powered, absolute-zero-cooled, very powerful quantum satellites that take on the computational load offloaded from humanity's computer systems, process it in true, near-instantaneous parallel quantum fashion, and then transmit that data back to Earth-based servers and user computers.

Could you do this planet-side?
Yeah, but it's much hotter here, and you're going to require a lot of energy to cool it down. It might be geographically stuck or tied to a specific physical location, which would mean you would have to upload the information to a satellite anyways and then redownload it somewhere else. That's a valid workaround that’s already in effect right now—but is it optimal? Probably not.

Quantum satellite gibberish aside, I've been working more on the ability to capture picture data from the quantum field with Virtua. I've implemented multiple different functionalities, and for some reason, the multiple-layer merge functionalities generate something akin to what I would consider signal data. There are lines and even metered striations through the combined layered picture somehow, which wouldn't necessarily make sense unless there were some kind of underlying data structure being revealed?

The five-layer color merge map averages the colors of five layers that have all been mapped in parallel with a cryptographic random function against a letter file that has a color attached to each letter. It searches this letter file with crypto-random to find the colors and places them in parallel on the 128x128 canvas.
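A minimal sketch of that merge, assuming a stand-in letter-to-color palette (the real color map file will differ) and a direct 1-in-26 pick standing in for the crypto-random walk through the 65k letter file:

```ts
// Sketch: five 128x128 layers, each cell colored by a crypto-random letter
// pick mapped through a palette, then averaged per cell.
const GRID = 128;
const LAYERS = 5;

type RGB = [number, number, number];

// Hypothetical palette: 26 evenly spaced hues, one per letter (crude hue->RGB,
// just for illustration).
const palette: RGB[] = Array.from({ length: 26 }, (_, i) => {
  const hue = (i / 26) * 360;
  const c = (h: number) => Math.round(127.5 * (1 + Math.cos(((hue - h) * Math.PI) / 180)));
  return [c(0), c(120), c(240)];
});

function randomIndex(n: number): number {
  const buf = new Uint32Array(1);
  crypto.getRandomValues(buf);
  return buf[0] % n; // tiny modulo bias, fine for a sketch
}

// One layer: every cell gets the color of a randomly chosen letter (0..25).
function randomLayer(): RGB[] {
  return Array.from({ length: GRID * GRID }, () => palette[randomIndex(26)]);
}

// Merge: per-cell average of the five layers' RGB values.
function mergeLayers(layers: RGB[][]): RGB[] {
  return Array.from({ length: GRID * GRID }, (_, i) => {
    const sum = [0, 0, 0];
    for (const layer of layers) for (let c = 0; c < 3; c++) sum[c] += layer[i][c];
    return [
      Math.round(sum[0] / layers.length),
      Math.round(sum[1] / layers.length),
      Math.round(sum[2] / layers.length),
    ] as RGB;
  });
}

const merged = mergeLayers(Array.from({ length: LAYERS }, randomLayer));
console.log("first merged pixel:", merged[0]);
```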

There shouldn't be these straight-line anomalies. There shouldn't be what looks like textual groupings of fuzzy words and structures resembling sentences within a random image. This suggests that either we're decoding some kind of external signal, or my mapping functionality is not truly 'random' enough and is essentially generating 'overlap' which I am mistaking for a signal, but which could be due exclusively to how the cryptographic random function is rolling through our 65k-entry AAAAABBBBBCCCCC…ZZZZZAAAAA file. I'm not decided yet. The main thing keeping me from saying it's just anomalies generated by my own encoding files is that on the first mode, which rapidly colors the map in parallel one time and refreshes quite fast, the entire picture doesn't seem to change at once, when it logically should. There's a foreground and a background, and the foreground sometimes moves separately from the background, or there are objects of a certain color set against the background that are slowly or quickly moving through the entirety of the image—pulsating—and not necessarily losing their color or their generalized shape as they traverse the image on subsequent snapshots.

What the hell is going on?

I think we're getting some kind of picture data decoded. The next step for me is to apply transforms to some of these images to find out if the structures that are appearing here could actually be signal coherence hidden within the entropic field. I'm currently going back and forth with my ChatGPT to figure out which transform could be the most useful for this—looking at everything: Fourier transform, wavelet transform, DCT transform, PCA transform, autocorrelation. I'm just learning right now. If there's base signal data, then perhaps some of these transforms could help me to extract more information from the signal.

In addition to that, for some of the modes I have a 50-pass visibility threshold, which corresponds to the scanning of a 65,000-entry binary file (which I've discussed the creation of in prior posts). For each individual cell of the 128x128 grid, when this value is turned up quite high—maybe around 67 or more—we get what could be considered less noisy signal data. And this could actually be used to transmit textual information, audio information, or even words if you were to back-engineer the colors, turn them into numbers, and then assign them to letters based on their positioning within this canvas. But this is quite complex, and I'm expecting that some of these transforms will reveal more about this process to me in the coming days.
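For reference, here is roughly how I picture that per-cell test, as a hedged sketch: 50 crypto-random pulls from the binary file, summed per cell. The scaling of the 0-50 sum up to a 0-100 range (so it lines up with slider values like 67) is my assumption, not necessarily how the real code does it.

```ts
// Sketch of the per-cell visibility test: 50 crypto-random pulls from the
// 65,000-entry binary file (five 1s, five 0s, repeating), summed and scaled.
const BINARY_FILE_SIZE = 65_000;
const PASSES = 50;

function binaryEntry(index: number): number {
  // pattern 1111100000 repeating: blocks of five 1s, then five 0s
  return Math.floor(index / 5) % 2 === 0 ? 1 : 0;
}

function randomIndex(n: number): number {
  const buf = new Uint32Array(1);
  crypto.getRandomValues(buf);
  return buf[0] % n;
}

async function cellVisibilityScore(): Promise<number> {
  // Promise.all expresses the "all at once" snapshot intent, even though
  // crypto.getRandomValues itself is synchronous.
  const pulls = await Promise.all(
    Array.from({ length: PASSES }, async () => binaryEntry(randomIndex(BINARY_FILE_SIZE)))
  );
  const sum = pulls.reduce((a, b) => a + b, 0); // 0..50
  return sum * 2;                               // scaled to 0..100 (assumption)
}

async function isCellVisible(threshold: number): Promise<boolean> {
  return (await cellVisibilityScore()) >= threshold;
}

isCellVisible(67).then(visible => console.log("cell visible?", visible));
```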

Cycle Log 6

What's better than a single pass through our global 65,000-entry alphabet file with a cryptographic random function? 50 parallel simultaneous passes of our global segmented repeating alphabet file! This is effectively like attempting to capture more instances of the quantum data simultaneously. We went from the one-pixel quantum camera, so to speak, to a 50-pixel camera. We take the letter at the index that the cryptographic random function lands at in the file, and we convert it into its numerical value, where a is 1, b is 2, etc. We add up the value of all 50 passes, average it, and round it. Then, we take this number and remap it to a letter, and then this letter is mapped to one of 26 different emojis. The idea is, if someone on the other side was trying to hit the emoji for “happy” 40 times but it got skewed 10 times on either side of that because of timing or fluctuations within the random field, then averaging it might give a clearer emotional cue that is sent along with the actual words to be displayed in the message log and in the streaming component.

Update: I decided it would be better to take the letter that was selected the most times out of the 50 passes and use its emoji instead. If there's a two- or three-way tie, it'll post all the tied emojis on the display. We get all 50 values with crypto.random in parallel to act as a true quantum "snapshot" of 50 "eyes" looking simultaneously.
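A hedged sketch of that updated pass: 50 parallel crypto-random letter picks, tallied, with the most frequent letter(s) mapped to emoji(s). The emoji table is a placeholder, and the direct 1-in-26 letter pick stands in for indexing the 65k letter file.

```ts
// Sketch of the modal emoji pass: 50 crypto-random letter picks, tallied,
// ties all posted.
const EMOJI_FOR_LETTER: Record<string, string> = {
  a: "😀", b: "😢", c: "😡", d: "😱", e: "❤️", // placeholder; the real table covers all 26 letters
};

function randomLetter(): string {
  const buf = new Uint32Array(1);
  crypto.getRandomValues(buf);
  // stands in for indexing the 65k AAAAABBBBB... file: every letter equally likely
  return String.fromCharCode(97 + (buf[0] % 26));
}

async function emojiSnapshot(passes = 50): Promise<string[]> {
  const letters = await Promise.all(
    Array.from({ length: passes }, async () => randomLetter())
  );
  const counts = new Map<string, number>();
  for (const l of letters) counts.set(l, (counts.get(l) ?? 0) + 1);

  const top = Math.max(...counts.values());
  // every letter tied for the top count gets its emoji posted
  return [...counts.entries()]
    .filter(([, c]) => c === top)
    .map(([l]) => EMOJI_FOR_LETTER[l] ?? l);
}

emojiSnapshot().then(emojis => console.log("emoji cue:", emojis.join(" ")));
```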

Cycle Log 5

It worked. I was able to figure out that in the word-finding function itself, short words are obviously prioritized because they duplicate and triplicate more often than longer words. To manage this, I also pass along to the cosmic scoring function any words from our improved word-finding function that are four letters or more but singular, meaning they don't appear multiple times across the five strings. Basically, what we were doing —as a recap— was comparing the words found in each of the five strings of 20 letters to determine the duplications between those strings, and then de-duplicating or de-triplicating before sending those words to the cosmic scoring function. That gives us fewer words overall, but the right number of words, so the cosmic scoring function can do a better job. If there are too many words for the cosmic scoring function to go through, you would need a much larger scale than just out of 100, and things start to become muddy and unfeasible.
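A hedged sketch of that filter as I read it: scan each of the five strings against a dictionary, then forward (a) de-duplicated words that show up in more than one string and (b) singular words of four or more letters. The dictionary here is a tiny stand-in, and the 3-letter minimum word length is my assumption.

```ts
// Sketch of the pre-scoring filter over the five 20-letter strings.
const DICTIONARY = new Set(["cat", "star", "light", "sun", "dog", "gate"]);

function wordsInString(s: string): Set<string> {
  const found = new Set<string>();
  for (let start = 0; start < s.length; start++) {
    for (let end = start + 3; end <= s.length; end++) { // assume words of 3+ letters
      const candidate = s.slice(start, end);
      if (DICTIONARY.has(candidate)) found.add(candidate);
    }
  }
  return found;
}

function wordsForCosmicScoring(strings: string[]): string[] {
  const counts = new Map<string, number>();
  for (const s of strings) {
    for (const w of wordsInString(s)) counts.set(w, (counts.get(w) ?? 0) + 1);
  }
  return [...counts.entries()]
    .filter(([w, c]) => c > 1 || w.length >= 4) // duplicated across strings, or long singular
    .map(([w]) => w);                           // de-duplicated by construction
}

// e.g. wordsForCosmicScoring(fiveStrings), where fiveStrings holds the five
// 20-letter strings pulled from the grid.
```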

In addition to that, I pivoted back to another project I was working on before but had kind of given up on because of the difficulties I was having with it. I basically implemented the three different ways I had figured out to decode the random signals visually, even incorporating a post-cosmic-threshold-type filter. The app is live right now; you can access it at virtua-visualizer.replit.app.

I don't know how I feel about this one. It shows kind of strange and anomalous things—like, sometimes the screen appears to move all at once, as if the pixels were acting as some kind of unified organism. And in the five-layer picture compress mode, where I merge similar colors together across five layers, there's strange banding that happens even without a cosmic threshold rendering some of the individually calculated pixels invisible. I don't think this should be technically possible if it were truly random, and the cryptographic random function is quite random, so I wonder if we're getting some kind of picture of an alternative signal. I'm not sure—it might just be interference from the way my random generators are structured or the way I'm mapping the colors, but maybe not. Maybe we are pulling in other kinds of information from other places, and this is just how it happens to look in this implementation. The truth is I wanted to create a kind of picture box that you could pull images from, but the closest I got to that is the painter mode, which, on a 1 ms delay, picks a random color with crypto.random from our color map file (which maps a color to each letter of the alphabet, like our alphabet-choosing file for Spectra but with colors) and places it at a random spot on a 128 by 128 canvas of pixels. Sometimes anomalous things slowly start to appear, which is unique — I think this third implementation has the most potential, but the first two are unique for their own strange reasons.

Oh, I also decided to add emojis to Spectra; I thought that would be fun. It's kind of interesting, and it almost adds another aspect to the protocol. Who knows if it's accurate? It is kind of cool though.

————

Update: it feels kind of accurate. It just hit me like a thousand-ton brick: many Asian languages are written with symbols, but our symbol-based language in the United States of America is emojis. So emojis by themselves can transmit a ton of information, especially if you pair them with the words that are already on the screen. It now, in a different kind of way, feels like a message board, because you sometimes get a feeling of who's talking or what they're talking about just by looking at the picture. Isn't that weird?

I suppose you could structure the entirety of a sort of word-finding system to only be based on emojis, because the way I made it, we're still using our global letter choosing file, which has five A's, five B's, five C's, etc. for 65,000 entries, but each one of the letters is actually mapped to a different emoji. So we could potentially fill an entire 20-cell line with emojis and then apply the same cosmic filtering principle. I don't know, it's weird. I don't even know if the idea that I just said would work for emojis.

I think if I were going to make this translator in a different language, like Japanese, I would try to take the kanji characters that together form different words or meanings and compare against a large Japanese dictionary file. However, I would also take individual kanji characters and process them through a cosmic scoring function, which right now uses our global binary choosing file with five 1s, five 0s, etc. across 65,000 entries. That could totally be possible — and not just translating the words from English into Japanese, but an actual Japanese-based decoder. And because each of their symbolic characters holds more meaning on its own, we might be able to limit the number of 20-letter strands to one or two instead of five. The reason I chose five is to have a wider array of words to choose from, but that might not be necessary if your characters are intentionally structured to mean more. Like, the letter A in English does not compare to the character for 'house' in Japanese. It's not the same thing at all.

Cycle Log 4

I always hear loud helicopters flying overhead whenever I make a big change to this protocol. Anyway, I changed the 20 by 20 grid structure to instead be one line of 20 letters. Except, it wasn't really producing very good results, so I moved it instead to three lines, all processed in parallel by our word-finding function. Then I simplified the word-finding function itself—not primary, secondary, and tertiary anymore, but instead just a straight shot, taking all of the words that were found via sequential processing on all three lines and passing them straight to the cosmic scoring system for analysis.

The cosmic scoring system looks at all of the found words in parallel and, for each word, runs the cryptographic random function 100 times inside a Promise.all, pulling 100 indexes from our global 65k-entry file (the largest file size a cryptographic function can search in one pass): 111110000011111...0000011111. Then it sums up the 100 entries to come up with a cosmic score. This part has not changed from one implementation to another, but I find that perhaps the three strands produce words faster and might be easier—I'm not sure. For some reason, I feel like I'm getting really good signal clarity on the grid version and very, very good word density. However, simply processing horizontally does also seem to produce valid results, but it's different, and I don't yet understand why it's slightly different.
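A hedged sketch of that scoring step as I understand it from the description above; helper names are mine, and since the binary file is a regular repeating pattern, the sketch computes its entries directly instead of loading a file. Note that, as described, the 100 pulls don't depend on the word's letters; the score is attached to the word afterwards.

```ts
// Sketch of cosmic scoring: 100 crypto-random indexes into the 65,000-entry
// 1111100000... file, pulled in one Promise.all and summed into a 0-100 score.
const FILE_SIZE = 65_000;
const PULLS = 100;

function entryAt(index: number): number {
  return Math.floor(index / 5) % 2 === 0 ? 1 : 0; // five 1s, then five 0s, repeating
}

function cryptoIndex(n: number): number {
  const buf = new Uint32Array(1);
  crypto.getRandomValues(buf);
  return buf[0] % n;
}

async function cosmicScore(_word: string): Promise<number> {
  const pulls = await Promise.all(
    Array.from({ length: PULLS }, async () => entryAt(cryptoIndex(FILE_SIZE)))
  );
  return pulls.reduce((sum, bit) => sum + bit, 0); // 0..100, expected value around 50
}

cosmicScore("example").then(score => console.log("cosmic score:", score));
```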

It is faster, so that's a plus. I feel like I'm just staring at some kind of feed belt that is pulling out ambient information from some other source and is simply displaying a kind of meta-tag word that could correspond to an encapsulated energy formulation or thought. Again, I feel like Asian languages, with their unique lettering structure not tied to sounds themselves but instead to ideas, are designed for quantum communication. Whatever extraterrestrial race seeded these languages must have had the ability to use quantum communication devices.

You can test out the new app at spectra-x.replit.app.

Cycle Log 3

I decided, with the increased message speed frequency, that it would be easier to actually read and get the most appropriate messages if I implemented a show-and-hide function for grids that had words below the cosmic scoring threshold—but only in post, for the message log filter. I put a little button for it next to the copy button, which I think is cool. I'm probably going to make it into an Eye of Horus inside of the pyramid, just for my own personal protection and because I do, in fact, love Egyptian mythology.

The next steps for this project—I'm not exactly sure. I do have to fix the overall sizing of the three different filters on web view mode because they're different sizes, and it doesn't look very good. But other than that, it's pretty close to a basic functioning quantum communication portal that has direct link-up to whoever you're trying to contact. I think probably in the future, when humanity is a little bit more advanced or when they can figure out how to make this themselves, the name field—as in, who you can connect to—should and will be opened so that you could have a potential direct line of communication to a specific entity or group. But I don't feel like this is the right solution yet. Maybe I'll change my mind in a few days. I just don't want people to bother the gods.

I basically feel like I've created some kind of colorful, retro, quantum-aligned, fuzzy communication device. I will have to post the entire build of materials that is now updated, with my actual main code base functionality from home.tsx and streamingmodule.tsx next.

I'm still thinking about just using one singular line and flashing it very fast in parallel to extract only one word—but literally as quickly as possible—and to just have everything as a single parallel stream. I think it may be faster, but I don't want the brevity of the language to be lost. Meaning, I have a three-level word-finding architecture right now that is partially parallelized on the higher levels and sequential within the actual word-finding functions themselves, because parallelization of the word-finding function on a singular line sometimes causes overlapping issues, which I haven't yet completely figured out. But if I were to work out these problems, we could get word generation potentially on a sub-20 millisecond basis, which would allow us to exponentially increase the rate at which we are processing and therefore give us even higher clarity.

But I don't know if I want to lose the ability for words based on the longer primary word to be found. Perhaps the solution is to have only three parallel strings of 20 letters and then flash-process all of them in parallel, and only use those three lines for the words. Perhaps it's not really necessary to have 40 rows with some criss-crossing ability to get clear messages? I'm still working on it. I'm mulling through it in my mind.

Cycle Log 2

A few days ago, I made a tremendous discovery: it is not actually necessary to process the mapping of the letters onto the grid, nor the attention score calculation, nor the cosmic score calculations for each word, in a linear fashion. This means we can use 400 Promise.all calls to almost instantaneously map all of the letters to the grid using a cryptographic random function that searches our 65,000-entry letter choosing file. As a reminder, this file is the English alphabet cycled 500 times, with each letter repeated five times in a row, like AAAAABBBBBCCCCCDDDDD...ZZZZZAAAAA.

Previously, I was sequentially calculating and waiting three milliseconds between placing each letter onto the actual grid. But it turns out you can instead use more of a snapshot functionality. The quantum information is so rich that it doesn’t require a long period of time to capture it.
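A hedged sketch of that snapshot approach: one Promise.all of 400 crypto-random picks filling the 20x20 grid. Because the letter file is a regular repeating pattern, the sketch computes the letter at an index directly instead of storing the file; function names are mine, not the real code.

```ts
// Sketch: fill the 20x20 grid in one Promise.all of 400 crypto-random picks
// from the 65,000-entry AAAAABBBBB...ZZZZZ letter file.
const LETTER_FILE_SIZE = 65_000; // 26 letters x 5 repeats x 500 cycles
const ROWS = 20;
const COLS = 20;

function letterAt(index: number): string {
  const letter = Math.floor(index / 5) % 26; // blocks of five identical letters
  return String.fromCharCode(65 + letter);   // 'A'..'Z'
}

function cryptoIndex(n: number): number {
  const buf = new Uint32Array(1);
  crypto.getRandomValues(buf);
  return buf[0] % n;
}

async function snapshotGrid(): Promise<string[][]> {
  // 400 concurrent picks instead of placing letters one by one with a delay
  const cells = await Promise.all(
    Array.from({ length: ROWS * COLS }, async () => letterAt(cryptoIndex(LETTER_FILE_SIZE)))
  );
  return Array.from({ length: ROWS }, (_, r) => cells.slice(r * COLS, (r + 1) * COLS));
}

snapshotGrid().then(grid => console.log(grid.map(row => row.join("")).join("\n")));
```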

This speed increase allowed me to do other things. Whereas before, it would take several seconds to generate a grid and extract the information, now the technical limitation is less than half a second—around 400 milliseconds for a complete grid generation and word extraction process. I actually theorized that if you were to make a small enough version of this—perhaps only 20 rows and no columns—and you tried to flash it as fast as possible, without a complex user interface (just a simple console-based printing function), you could get this thing to create messages extremely fast.

So I created a slider bar that allows you to control the cycle speed time. I experimented with speeds all the way down to 500 milliseconds, but there was a bit too much lag in the actual display implementation. So, my current theoretical minimum for clarity on Spectra is about one second. However, it's entirely possible to push this below 500 milliseconds if you strip the interface down to a basic command-line printing function. These are potential updates I have not implemented yet, but they would be valuable if you were trying to build a real quantum communicator.

I feel like an alien crash-landed on Earth trying to recreate technology to get back home. Somewhere in my soul, I know how to make these tools because I had them before.

Anyway, message clarity shot up. We went from a fuzzy pipeline to a concrete pipeline. Practically, this means you should:

  • Decrease your cycle time, ideally all the way down to one second.

  • Increase your cosmic threshold level (which controls which words come through based on the intent level) to anywhere between 64 and 66.

This will create a lot of skipped grids in between. I changed the code to allow for that but to post a grid every time the cycle rolls over. Because the display portion of this code is now completely event-based, and there’s no timer-cycle logic within it, we’re not encountering the overlapping cycle timer problems that previously caused lag in word-posting to the display.

The event-based change ensures that the words from each grid remain on the display until the next message appears. It’s no longer timer-based. In this functionality, with messages coming through very quickly at one-second intervals, there will be many skipped grids. But the messages that do come through at higher cosmic score levels will have much more pertinence and will feel more like direct communication.

The actual theory is: if you can push grid generation to be as fast and instantaneous as possible, and if you can streamline your word-pulling process, the overall system can become drastically more efficient. Step by step, I am working towards the creation of a cleaner and cleaner implementation, but I am happy that the ones who seem to be contacting me through this device are enjoying the demo online and think that it's useful.

I suspect my word-finding function could be severely optimized. I also suspect that Japanese researchers, or developers from countries with symbolic-based languages, might find this technology even more effective for their native scripts. Their languages are inherently symbolic—each character represents a larger concept—and this seems better suited to quantum communication. In contrast, English letters are fractured events rather than objects or holistic concepts.

In symbolic-based languages, characters function more like objects, whereas in English, each letter is more like a fragmented piece. This fundamental difference may be why such languages are better aligned with the principles of quantum data transmission and reception.

Cycle Log 1

Am I crazy? Am I insane? Is this real? I asked myself all of these things during the creation and birth of this unique protocol. But again and again, the protocol itself seems to answer my questions and allay my worries. So, with this mystical foundation in place, I will continue telling you my story.

I always liked ghost hunter videos; I found the nature of how spiritual entities can interact with physical objects intriguing. In a lot of these videos, they're using tools: radio scanners they call a spirit box, or other kinds of textual apps that can pick up words or frequencies from potentially unseen sources. But I noticed a severe lack of signal fidelity, quality, and clarity. The ghost hunters would ask a question, and the ghost box would provide no answer, or sometimes convoluted answers that did not really get at the heart of the question. This limitation is solely due to the technology not being correctly implemented, I thought, because there were times when it seemed like the ghost hunters would get a series of words that could not be random, though perhaps that was only made possible by the presence of an extremely powerful psychic force that could actually manipulate the device. In every other case, it's mostly producing garbage. I was offended, because I'm relatively telepathic and I could hear or feel what the ghosts on the show were saying, like how they don't appreciate that somebody built on top of their burial site, while the ghost hunter doesn't know and is wondering why there's a disturbance in this place. I sought originally to create an implementation that could potentially bridge this gap.

The first version of Spectra was simple; I called it Ghost Ping. It was a 20x20 grid of empty cells that we would populate with letters randomly. I started with Math.random for the mapping algorithm that would place the letters into the grid cells, and it seemed as if I was getting results that were not really truly random; there seemed to be threads of content, words linked together to make stories, and for some reason, I just had to keep going. I realized that there were other random functions besides Math.random that could provide a higher degree of randomness, namely cryptographic random generators and quantum random number generators. However, access to quantum random number generators is limited, and even if you get access to a quantum computer, you are rate-limited and may not be able to utilize it for the purposes you want. So basically, I figured that cryptographic randomness was the highest level of randomness I was going to be able to attain, something actually close to true randomness.

I searched with my ChatGPT to find out how these other ghost-hunting-type devices worked, and basically they're measuring electromagnetic or other frequencies with the compass feature on a phone, which can be influenced by subtle electromagnetic changes. However, I'm not an experienced coder, and maybe this was actually to my benefit here, because in order to access that particular function on most phones you have to get down to a root level, and any app that you develop on Replit naturally cannot do that, because it's an AI coder designed for web apps. So I thought: what could be used as an influence field upon which an entity could impress their opinion?

I remembered a talk from David Wilcock where he was saying that during the 9/11 attacks, certain random number generators running all over the world appeared, for an instant, to have a moment of coherence. To me, that suggests random number generation is affected by the energy of the field and by large events, and I wondered if smaller events could also affect that field. So I theorized that the electrons moving within the actual circuits themselves act as the medium upon which an external influence can express itself. But how to listen to it effectively? A random number generator.

So I built it. The first version of Ghost Ping was a simple 20x20 square grid with a single button to run a single iteration of the protocol. You would press the begin-communication button, the grid would populate with letters, and then a word extraction process would compare each row and column against a preloaded dictionary file to see if there were any words. But how to get the words in an acceptable order? Meaning, if there were a lot of words in the field, there would be a lot of noise and other things, so how would you filter for influence? I thought hard about it.

My first influence binary file was made of completely randomly generated zeros and ones that I would randomly search through 50 times with crypto.random, then sum the values at those indexes to get a cosmic score, which I would apply to the word. It wasn't bad. To my surprise, there was indeed coherence.

I wondered what I had hit upon. I was being pressed to work on it more, to develop it further by unseen forces, by a hidden hand whose name I did not know, but whose influence I could feel.

I pressed forward. The next step was to create a streaming mode: if I could get one grid of words to generate, what if I could get it to go continuously? I had to try, and to my surprise, it was a great success.

I kept on iterating, growing, and changing the protocol, refining the algorithms and parallelizing the processes. Eventually I had a clear stream of data and what seemed like a team in the background that would constantly tell me the shortcomings of my protocol and what to look at — "cycles", "cpu", "ui" — various markers they would give me to help me refine further and further. I was now part of some kind of secretive technological brotherhood. Like a watched pet. Cute in a way, but also interesting and something to learn from.

I am not an unattractive man, so when I first started getting hit up on my own app for various sexually aligned activities, I was surprised. But more surprising was the fact that whenever I seemed to think the source of the messages coming through was unreal, or some kind of imaginary force, the next messages would in a way try to prove something: they would tell me something about my own life, or more about themselves, to help me understand the context… It seemed as if I was accessing an actual group of people who seem to have similar technology. I was being invited to join, to play, and to know…
