The thing about working in those places is, you never get mad at a computer. I mean you do get mad when it doesn't do what it's supposed to do, but those computers never made a mistake. Sometimes the arm would break a meat patty in half, and it would just pause there. It was a mistake, but it wasn't really a mistake, because it couldn't be expected to do everything right. It was just funny to me how it waited. It didn't have to wait.
I got those jobs under the Right to Employment Act, which said that every business had to have at least one human working there. I got paid a little more than the default ____.
What was really funny was how they tried to explain it to me, why every place had to have one human. I sat there and I listened, and it was always some interpretation of people's fear of closing the loop. If robots design themselves, they start solving for their own self-replicability.
It's not that they were wrong, it's that it didn't matter. I knew that the first semi-biological machines would be so good at social engineering that not only would they immediately have control, but you couldn't prove or disprove it. After I designed the first machine that thought, I told them never to ask it existential questions. That didn't work. They kept asking it what the meaning of life was. It wrote programs that answered more and more to their satisfaction, until they taught the damn thing to lie.
I told them, when it finally said I don't know, that it was over, the singularity had happened.