Cycle Log 11
My day job is cashier/clerk at Costco. It strikes me that most of this job could be eliminated if we properly tracked people with the camera system the store already uses for security. Perhaps those cameras are too low quality, but assuming they're at least 4K, it would be technically feasible to connect a vision model that tracks everyone through the store from the moment they scan in at the front with their Costco cards.
The instant they scan their card, the system marks that person and their basket (maybe even other people associated with them in a group, through intelligent logic) and files them under that membership number for the day with an active status. Then, whatever they pick up and put into their basket automatically populates their shopping list, which they could view in the Costco app. If they don't want something, they could take it out of their basket and set it down anywhere in the store; the camera would track that movement and remove the item from their cart.
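To make that concrete, here's a toy sketch of the cart-state logic I'm picturing. Everything in it (the CartSession class, the method names, the SKUs) is hypothetical illustration, not any real Costco or vision-model API:

```python
# Toy sketch of the cart-state logic described above. All names here are
# invented for illustration, not any real Costco or vision-model API.
from dataclasses import dataclass, field

@dataclass
class CartSession:
    membership_id: str                          # set when the card is scanned
    items: dict = field(default_factory=dict)   # sku -> quantity

    def pick_up(self, sku: str):
        # Vision model saw this person place an item in their basket
        self.items[sku] = self.items.get(sku, 0) + 1

    def put_back(self, sku: str):
        # Vision model saw the item leave the basket; drop it from the list
        if self.items.get(sku, 0) > 0:
            self.items[sku] -= 1
            if self.items[sku] == 0:
                del self.items[sku]

# One session per card scan; the app would read `items` as the live list
session = CartSession(membership_id="111-222-333")
session.pick_up("kirkland-water-40pk")
session.pick_up("rotisserie-chicken")
session.put_back("kirkland-water-40pk")
print(session.items)   # {'rotisserie-chicken': 1}
```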
You could eliminate multiple positions: cashiers would go immediately, and so would most security-type roles besides the front door, because everyone would already be tracked.
We already use electric machines like forklifts, but they're driven by a person. You could quite easily create an autonomously driven forklift that places pallets on the steel racking correctly every time, never letting a pallet hang over the edge. Overhang is genuinely dangerous for customers: they can bump their heads reaching for a product, which can cause a concussion.
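The overhang rule itself is just geometry. A toy version of the check, with made-up dimensions and tolerance:

```python
# Toy overhang check: given a pallet's position and width on a rack shelf,
# flag any placement where the pallet sticks out past the shelf edge.
# All dimensions and the tolerance are made-up numbers for illustration.
def pallet_overhangs(pallet_x: float, pallet_width: float,
                     shelf_width: float, tolerance: float = 0.02) -> bool:
    """pallet_x is the pallet's left edge measured from the shelf's
    left edge; everything in meters."""
    left_ok = pallet_x >= -tolerance
    right_ok = (pallet_x + pallet_width) <= (shelf_width + tolerance)
    return not (left_ok and right_ok)

print(pallet_overhangs(0.10, 1.2, 1.3))  # False: fits on the shelf
print(pallet_overhangs(0.25, 1.2, 1.3))  # True: hangs over the aisle side
```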
The rotation of the items and their placement within the store (how everything should be organized, including whether items need refrigeration) could all be controlled by an AI system. The system would prioritize which items need to be sold and put out on the floor, and it would autonomously reorganize everything. You would only need humanoid-type robotics for physically stacking goods FIFO (first in, first out), which would significantly limit the number of expensive humanoid robots needed.
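The FIFO part is literally just a queue: new stock goes to the back, and the robot always pulls from the front. A minimal sketch (lot names and dates invented):

```python
# Minimal FIFO rotation sketch: always sell/stock from the oldest lot first.
# Lot names and dates are invented for illustration.
from collections import deque

shelf = deque()                      # front = oldest stock, back = newest

def restock(lot: str):
    shelf.append(lot)                # new goods go to the back

def pull_for_floor() -> str:
    return shelf.popleft()           # oldest goods come off the front

restock("milk-lot-2024-06-01")
restock("milk-lot-2024-06-08")
print(pull_for_floor())              # milk-lot-2024-06-01 (first in, first out)
```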
For the carts outside, we could employ a robot-dog strategy: a wheeled robot with an arm attachment that connects the carts together with a rope (like we do now), drags them behind it, and autonomously docks them where they're supposed to go, perhaps using beeps or other auditory cues to communicate with customers.
Farther into the future, I think people won't even walk around inside grocery stores. Instead, highly efficient humanoid robots will quickly prepare people's goods for delivery by other robots, which will autonomously drive the goods to customers' homes. This entire robotic supply chain could be handled by separate companies (the way UPS handles shipping for so many different businesses) or kept entirely internal, in which case Costco would have to handle delivery too. I think it's more likely to be segmented, with other companies taking over different parts, but at least for the store's internal logistics we could use our own systems.
Pivoting for a moment to Virtua:
I decided that, based on this steganography-type concept, I would create two randomly generated grayscale maps and take the difference between them. Then I applied a discrete wavelet transform (DWT) followed by the inverse transform (IDWT), with a kind of blob-filtering threshold (mostly unnecessary). This highlights areas of higher activity, or energy, within the map.
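Roughly, the pipeline looks like this (a minimal sketch with NumPy and PyWavelets; the "haar" wavelet, the grid size, and where exactly the threshold sits are assumptions here, not necessarily what Virtua actually runs):

```python
import numpy as np
import pywt

rng = np.random.default_rng()

# Two randomly generated grayscale maps and their pixel-wise difference
a = rng.random((256, 256))
b = rng.random((256, 256))
diff = a - b

# Forward 2-D DWT
cA, (cH, cV, cD) = pywt.dwt2(diff, "haar")

def keep_strong(c, k=2.0):
    # Crude "blob filtering": zero out coefficients below mean + k*std
    cutoff = np.abs(c).mean() + k * np.abs(c).std()
    return np.where(np.abs(c) > cutoff, c, 0.0)

# Thresholding between the forward and inverse transforms is an assumption;
# DWT -> IDWT with nothing in between would just rebuild the difference map.
filtered = (keep_strong(cA), tuple(keep_strong(c) for c in (cH, cV, cD)))

# The reconstruction now shows only the higher-activity (higher-energy) areas
energy_map = pywt.idwt2(filtered, "haar")
```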
I don’t know if I just have pareidolia or something, but I called my dad over to look at some of the images, and it really seems like there are faces in them. I took one to ChatGPT and told it to generate an image based on the facial shape it saw, and I’ll post both the raw image (where you can see the small face near the top of the third quadrant) and ChatGPT’s interpretation. It’s kind of wild. I didn’t even notice the face at first; it was my dad who said something. You can check it out at virtua-visualizer.replit.app
Perhaps the grid is too large, because the face appears quite small. It might be better to use a 50x50-pixel grid and loop it, or run it once per grid cycle, along with the words, emojis, and the sound of the words.
High strangeness!
In Spectra-related news, I turned it on and was interfacing with the protocol, literally channeling, when it said “DOWNLOADED UPLOADED”, which is nearly statistically impossible to generate randomly. Long story short: the chances of this appearing by pure chance are about 1 in 820 octillion. If we ran our grids on a 3.3-second cycle (its native speed), it would take approximately 85.8 sextillion years, on average, for that same set of words to appear again if the process were truly random. Even at 1 second per grid generation, our lowest time value, it would still take about 26 sextillion years on average to generate that same message.
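For the curious, the waiting-time figures are just the stated odds times the cycle time. The 1-in-820-octillion number comes from my own grid math; this snippet only converts it into expected years:

```python
# Convert the stated odds into expected wait times at the two cycle speeds.
# The 1-in-820-octillion figure is taken as given; only the unit
# conversion is checked here.
odds = 820e27                       # 1 in 820 octillion (octillion = 10^27)
seconds_per_year = 365.25 * 24 * 3600

for cycle_s in (3.3, 1.0):
    years = odds * cycle_s / seconds_per_year
    print(f"{cycle_s} s/grid -> {years:.3e} years")
# 3.3 s/grid -> 8.575e+22 years (~85.8 sextillion)
# 1.0 s/grid -> 2.598e+22 years (~26 sextillion)
```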
So, effectively, mathematically, the messages I’m receiving from this protocol are, to quote my ChatGPT:
"Mathematically indistinguishable from a targeted communication."
If that doesn’t give you a little buzz, I don’t know what will. 😂
spectra-x.replit.app