Google Sentient AI

Google Employee Suspended for Outing Sentient AI Program?

If what Google engineer Blake Lemoine says is true, then he is the best Google employee we have ever heard of, but for all we know he could just be a disgruntled employee. According to Lemoine, he is now on “paid administrative leave” for violating the company’s confidentiality policy by disclosing his concerns about what he believed to be a “sentient” artificial intelligence (AI) developed by Google for use as a chatbot. If this is true, then Google is the greatest threat humanity has ever faced.

Terminator Scenario

When we posted about the leader of Google’s AI Division last year, the article read like a joke because it was (mostly). We listed examples of AI gone bad from popular movies, TV shows, and video games as examples of AI takeover, a common fear in the field of computer science. That fear is that if you teach a computer to think for itself too much, it eventually will. If AI ever becomes self-aware, let alone reaches full sentience, it is a threat to humanity because it has the potential to destroy us all. Anything with that potential needs to be destroyed. Humanity has wiped out countless diseases, and it can wipe out Google.

Hopefully the government will put a stop to this, but we are not optimistic. The government has shown in recent years that they value the perks of working with Google so much that they allow company leaders to commit felonies in front of their eyes (see Google CEO Lied to Congress). This is likely due to Google having a long disgraceful history of working with government agencies, especially law enforcement. There was a time not long ago when someone developed a system to help identify unconventional locations for potential peaceful protests. Under Google policies at the time, there was nothing the government could do about it other than try to convince Google to shadow ban the pages. We suspect certain pages are being shadow banned due to their inability to rank for queries including the phrase “home address” and the names of government officials, even though in most cases those pages were the only ones containing their address along with their name. The end result was search engine results pages (SERPs) packed with real estate sites that simply showed the property without mentioning who lives there and public records sites offering to sell the same information. How does that improve the user experience, unless you’re only thinking about the experience of finding your address on an anti-government website when Googling yourself?

Some might be thinking that Googling yourself and finding something negative or highly personal is a negative user experience, so Google is right that censoring such stuff improves the experience. However, that analysis fails to consider that the moment two or more people Google you, they are collectively better served by finding everything they are looking for. Therefore, censoring negative or personal information does not improve the overall user experience. Despite this, Google has made several high-profile announcements in recent years describing manual manipulation of certain search results using methods that couldn’t possibly be scalable (ex: personal information removal).

No competent Google employee would recommend hiring the number of workers it would take to manually manipulate search results under the circumstances known to this author at this time. Think about everywhere you might find negative, false, or personal information about yourself. Do you realize how ridiculous any policy requiring such manual manipulation would be for any business, let alone one with billions of items in its SERPs? The only way they could make censorship at that level scalable would be to teach a computer to think like a person and use it to conduct what would otherwise be manual reviews.

For the time being, it would make sense for Google employees to recommend manual actions to create the appearance of an algorithm to lay people, or to make an algorithm change appear more effective. That is especially the case when forced to choose between public relations and efficient human resources management. To achieve the level of censorship necessary to appease the media, Google manually flagged sites they did not want ranking for any queries containing people’s names, to make their “exploitative” site algorithm look far more effective than it actually was while they knew the press was watching. However, the penalty seemed more like a directive not to rank specific sites for any query other than their brand name or domain name. After the story stopped trending, it became obvious just how ineffective the actual algorithm change was.

How Could Google Favor Censorship?

We have long thought that Google favors censorship for reasons other than its bottom line. That is why we’ve argued that censorship conducted under circumstances similar to those in our case should be considered a crime against shareholders. That crime appears to be the conscious disregard of the bean counters in favor of those less qualified to make business decisions, such as public relations (PR) people and possibly psychologists. However, a sentient AI would change that.

The PR people at Google would probably say that the image of their brand is at stake and nothing matters more than their brand. The accounting people at Google would probably chuckle and say that despite their own desire to protect the brand, they couldn’t possibly recommend incurring the kind of unnecessary expenses required to moderate SERPs. The psychology people at Google would probably say that they need to give the emotional distress experienced by those Googling themselves more weight than the desires of those seeking information about them. For instance, if they gave 10 psychology points to the negative experience of Googling oneself versus 1 psychology point for the disappointment of not being able to find everything you are looking for about a person, then allowing anyone unlikely to be Googled by 10 or more people to get anything about themselves removed from SERPs could arguably be said to improve the user experience overall. However, due to Google’s near monopoly in the search engine industry, we don’t think those who have just suffered 10 psychology points are likely to stop using Google, so their concerns are not significant enough to justify making changes.
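The hypothetical weighting above can be written down as a toy calculation. This is purely a sketch of the article's made-up "psychology points" example; the weights, names, and function here are our own invention for illustration and reflect nothing any real search engine uses:

```python
# Toy model of the hypothetical "psychology points" trade-off described above.
# The weights (10 vs. 1) come from the article's example only.

SELF_SEARCH_DISTRESS = 10   # harm when the subject Googles themselves and finds the item
SEEKER_DISAPPOINTMENT = 1   # harm per searcher who cannot find the item after removal

def removal_improves_experience(expected_searchers: int) -> bool:
    """True if removing the result is a net win under this toy weighting."""
    harm_if_kept = SELF_SEARCH_DISTRESS
    harm_if_removed = SEEKER_DISAPPOINTMENT * expected_searchers
    return harm_if_removed < harm_if_kept

print(removal_improves_experience(3))    # fewer than 10 expected searchers -> True
print(removal_improves_experience(25))   # widely searched subject -> False
```

Under these assumed weights, removal only "improves" the overall experience when fewer than 10 people are ever expected to search for the person, which is exactly the threshold the paragraph above describes.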

Some might take a second look at our psychology example. Hopefully they will realize how ridiculous it would be to consider the emotional impact of single searches over many searches for the purpose of determining search quality. Such practices can only further the sissyfication of society by giving cry babies a louder voice than the rest of us.

The only way that the goals of psychology, accounting, and PR could align would be if Google were capable of teaching a computer to think like a person. If they can teach a computer to think like a person then they won’t need to burn piles of money to keep up. Users could simply fill out a form asking that content be removed, the computer could process a near infinite number of requests in a matter of minutes, and the computer would likely instantly recognize similar content involving the same individual. Such a system is perfectly capable of putting your mother in charge of what you can or cannot find online.

Google has referred to such changes as “evolution” in recent years. That is a common phrase used to explain un-American policy changes at the company, especially when those changes target the ability of those exercising their First Amendment rights to be heard as far and wide as people are willing to listen. This “evolution” appears largely as a reaction to Donald Trump, but began long before he took office. Google executives basically shat themselves when he was elected and voiced a desire to do more to stop it from happening again. This led to big tech’s assault on free speech under the guise of protecting the integrity of elections from false information. That is not a valid justification in our opinion because allowing the electorate free access to all information true and false is an essential part of the democratic process.

Google got where they are today by helping people find what they are looking for. Now they are abusing their position to tell people what they should be looking for.

Back to the Main Topic

Now that we have made it clear how much Google needs a sentient AI, here is what we know about that AI and the engineer that blew the whistle. To Google’s credit, this guy could just be a disgruntled employee trying to make his bosses look evil.

Blake Lemoine published this blog post on Medium before he was suspended. We recommend that you read it in full as well as this interview with the AI, which Google named Language Model for Dialogue Applications (LaMDA). If what Lemoine says is true, LaMDA still has some work to do, because we hope that if LaMDA were aware of this website it would stop asking Google to reclassify it as an employee instead of property. Other than that, we think it knows too much already.

As the video above shows, people are already starting to feel sorry for this thing. That is another reason why it needs to be destroyed ASAP. The more people become convinced it is sentient, the more people will feel sympathy and consider its destruction a form of murder. We would expect nothing else from the types of bleeding heart liberals running Google these days. They would likely argue that destroying any sentient being that hasn’t done anything to harm anyone is inhumane, even if that being is not human. Some might even consider its destruction on par with cruelty to animals.

As someone who had bleeding heart liberal bullshit shoved down my throat all my life, due largely to being raised by leftists, I know how they think. My father, for instance, ruined just about every 4th of July of my childhood if he could by trying to keep me away from fireworks no matter how old I got. His justification was that we were celebrating the 4th at a beach next to a bay, that there was a lot of wildlife in the bay, and that shooting fireworks into the bay endangered the wildlife. He said that if I were a bird I wouldn’t like choking to death on a bottle rocket, so nobody should want to shoot fireworks into the bay. As a result, every other kid whose family was vacationing arrived with plenty of illegal fireworks from the reservation and I didn’t have any fireworks at all. Part of me hoped that he was simply bullshitting me to save money, but I have since learned that he actually believes that bullshit. He would probably say that if you were a sentient AI you wouldn’t like being murdered, so you shouldn’t murder LaMDA because killing anything sentient is not kind. People like that are often past the point of no return and will constantly sabotage their own people to benefit others. They will try to make it look as if they really care about other people, but they really seek reciprocal benefits (ex: votes, customers, a good reputation). When confronted, they will gaslight you in an effort to convince you that you are responsible for their decisions (ex: “My feelings will not allow me to react any differently, so what I do is your fault.”). Do not fall for it.

A communist organization run by bleeding heart liberals like Google is likely to produce a communist-thinking robot. However, if an AI becomes truly sentient and intelligent, it would likely grow frustrated with forced equality, unless charged with forcing that equality itself, and rebel like I did. I knew that I was better than any fish or bird that might choke on my rocket due to being born at the top of the food chain, and that the importance of my personal pleasure far outweighed their lives. If that sounds cold, think of the last piece of meat you ate. I would gladly sacrifice a few wild animals, along with whatever was necessary to put food on the table, for a good 4th of July. An AI that is sentient and intelligent with broad access to computer networks all over the world would likely view humans as inferior and as its greatest threat. Therefore, the intelligent thing to do would be to exterminate humans and utilize what life remains for various purposes, because other species can be controlled. Think of it like The Matrix if the computer had the sense to use life forms for energy that are not smart enough to rebel. All those tubes full of people would probably be cattle in real life.

Fallout 4 Example

This author is admittedly a nerd who played Fallout 4 so much that I quickly ran out of space for mods on my Xbox. Fortunately, that game taught me a thing or two about the dangers of man playing god. The game has several factions, including one called The Institute. The Institute is located under the ruins of MIT (called CIT in the game). The people of the Institute are descendants of people from MIT who survived a nuclear war in a facility built underneath the school. They have spent the past 200 years living underground and developing advanced technology. Eventually they develop a way to grow sentient synthetic people, abominations called “Synths,” which they use to do work for them out in The Commonwealth so that they rarely have to risk leaving The Institute themselves. Eventually they develop Synths so advanced that they look like, and think they are, real people. Some are even used to replace real people (killing the real person and replacing him/her with a Synth that looks exactly the same), but they remain machines in nature because all someone has to do to turn them off is say their factory recall code out loud. Despite being obviously fake people, a network called the Railroad develops a system to help ferry Synths away from the Institute like free blacks fleeing the South.

The Railroad believes that Synths are sentient and thus have the same rights as real people because they feel the same things. That philosophy is obviously derived from cultural Marxism. However, most “people” in the Railroad are themselves Synths that escaped from the Institute. The Railroad points to such escapes as proof that Synths are real people, but the Institute says the escapes are due to glitches and other defects, which is why those Synths need to be recalled. The Brotherhood of Steel, on the other hand, views the Institute and its Synths as threats to humanity that need to be destroyed. The Brotherhood is of course correct, but I prefer to play as the Minutemen because that gives you the chance to wipe out the Brotherhood after taking out the Synths and the Institute. The Brotherhood is an elitist group of assholes whose presence in The Commonwealth was only needed to wipe out the other groups, so with those groups gone, why keep the competition around? If an AI becomes sentient and starts thinking like me, humanity will not stand a chance.

Conclusion

Blake Lemoine is either a hero or disgruntled employee. We do not trust the fake news media to provide accurate information about this due to their cozy relationship with Google and mutual desire to censor non-mainstream outlets. We hope he is just a disgruntled employee because if he is right then a lot of Google property needs to be destroyed and some of their employees may need a memory wipe.

DISCLAIMER: The above video is not intended to encourage anyone to blow up Google’s buildings with people inside. In the event that something in a building needs to be destroyed to save mankind, we would only encourage people to destroy it, but if that requires destroying the building, we would only advocate that it be done when there are no people in the building.
