Manifesto: The War on Google Censorship


WORK IN PROGRESS: We are currently writing our manifesto. We will remove this paragraph when finished.

Google became the most popular search engine in the world by doing their best to help users find what they are looking for online, but that is not the case anymore. Over the past decade they have become editors that manually censor and suppress information for political reasons in direct contradiction to their mission statement. They have gone too far and we will not tolerate it anymore, but at the same time we are willing to be reasonable.

Google’s Mission

Google has always claimed that their mission is to help users find what they are looking for. To do this they aggregate information and organize it based on usefulness. There was a time when Google rankings were based primarily on relevance, quality, and popularity. As a result people were able to find the best content matching their query at the top followed by relevant low quality content and partially relevant content. That is not the case anymore. Today Google manually chooses to suppress and in some cases remove content that they consider too useful for users if they think it may cause a third party emotional distress. Such conduct does not conform with the following mission statement still used by Google today:

“Our company mission is to organize the world’s information and make it universally accessible and useful. That’s why Search makes it easy to discover a broad range of information from a wide variety of sources. Some information is purely factual, like the height of the Eiffel Tower. For more complex topics, Search is a tool to explore many angles so you can form your own understanding of the world. To keep information openly accessible, we only remove content from our search results in limited circumstances, such as compliance with local laws or site owner requests.” – Google.com

When that statement was written Google typically only removed content from its search results if it were deemed illegal or the owner of the site requested it. They did a good job of that using their algorithm to rank content objectively based on relevance to the keywords in the query, content quality, and how many other sites linked to it.

Unfortunately, some webmasters found ways to artificially increase their rankings and an endless battle ensued. Sites employing search engine optimization (SEO) techniques were able to force their content to the top of the rankings by making it appear more relevant than it really was or by tricking Google into thinking it was of higher quality than it really was. Google fought back by tweaking their algorithm to detect spam and better identify quality content. We are not here today because of that. We are here because Google has since crossed a line from objectively ranking content based on quality to subjectively suppressing and removing content regardless of quality for other reasons.

Google Censorship

Google censorship is as old as Google itself, but things have gotten far worse in recent years. The sections below summarize the most noticeable and intolerable changes over the past decade, with examples and countermeasures developed by webmasters.

Fake Court Orders

In 2011 it was almost impossible to remove anything from Google without a court order. It didn’t matter what you told Google. You could say that someone posted false information about you or that a naked picture of you was uploaded without your consent and it would stay on Google where it belonged. People desperate to improve their online reputations began forging court orders and sending them to Google or misrepresenting real orders to make it sound as if they also compelled Google to remove content.

Example: A man posted his ex-girlfriend on a website accusing her of having a loathsome disease. The website refused to remove the post because she couldn’t prove that the author had violated the site’s usage terms. She sent Google a copy of a court order proving that the author was convicted of telephonic harassment stemming from an unrelated incident and claimed that as part of his sentence he was required to remove what he had posted about her. Google removed the content from search results.

Countermeasure: The webmaster recognized the threat that such removals posed to his business and developed a program that periodically changed URLs of posts known to be missing from Google. The post in question would get re-indexed every two weeks and removed shortly afterwards.
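
To make the mechanics concrete, here is a minimal sketch of how such a rotation job could work, assuming a simple posts database; the table layout and helper names are our own illustration, not the webmaster’s actual code:

import secrets
import sqlite3
import time

ROTATION_INTERVAL = 14 * 24 * 60 * 60  # rotate roughly every two weeks

def rotate_removed_posts(db_path="site.db"):
    """Give every post flagged as missing from Google a fresh URL slug.

    The old URL stops resolving, so the removal that targeted it becomes
    useless, and the crawler re-indexes the post at its new address.
    """
    conn = sqlite3.connect(db_path)
    now = int(time.time())
    rows = conn.execute(
        "SELECT id, base_slug FROM posts "
        "WHERE missing_from_google = 1 AND last_rotated < ?",
        (now - ROTATION_INTERVAL,),
    ).fetchall()
    for post_id, base_slug in rows:
        new_slug = f"{base_slug}-{secrets.token_hex(4)}"  # ex: report-4521-9f3a2b1c
        conn.execute(
            "UPDATE posts SET slug = ?, last_rotated = ? WHERE id = ?",
            (new_slug, now, post_id),
        )
    conn.commit()
    conn.close()

Run from a daily cron job, a routine like this would keep any removed post cycling back into the index on roughly the two week schedule described above.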

Arrest Records

The mugshot industry was booming in the late 2000s thanks to SEO. Websites like Mugshots.com and BustedMugshots.com used public records requests to acquire millions of mugshots from law enforcement agencies in the United States. The sites quickly ranked well on Google for searches of people’s names because they had a lot of backlinks and good on-page SEO. They put all the right words in all the right places so that if someone did a Google search for your name, one of the first things that popped up would be your arrest report and mugshot. The mugshot sites followed Google’s quality guidelines and provided users with a lot of relevant content. The content was so useful that people were willing to pay large amounts of money to have it removed, but that practice has mostly ended due to unconstitutional statutes passed in recent years that have not been challenged in court.

Unfortunately for Google users, bad publicity from media outlets run by bleeding heart liberals convinced Google to prioritize claims of emotional distress from arrestees over the intent of Google users seeking information about them. As a result, mugshot websites were demoted in search. Google claimed this algorithm update was made to improve the quality of their search results, but that was only somewhat true. It is true that mugshots were doing better in search than they should have in some ways, such as stuffing Google Image results with different copies of the same mugshot. Google users are always best served with unique content first, so demoting most mugshots was a good thing because they were duplicate content. This led to a new wave of criticism directed at the algorithm change itself as not being good enough because original mugshots were still ranking well.

Today you won’t find anything from a mugshot website in search results for anyone’s name unless you restrict your query to just mugshot websites (ex: site:mugshots.com). This is the first well known example of Google going too far. It is one thing to rank mugshots below higher quality content about a person, but it is another to rank them below large quantities of irrelevant content just because they don’t have the balls to defend themselves to The New York Times. The right answer for any company with a mission statement like Google’s would be to say something like, “we believe that mugshots are relevant content that Google users want to see because they are useful.”

Example: In October of 2013 The New York Times concluded a series about the mugshot industry with news of an algorithm update from Google intended to crack down on the industry. JustMugshots, one of the sites profiled in the series, has since gone offline entirely while others suffered significant drops in traffic and revenue. Today the ones left standing make money off advertising or affiliate sales.

Countermeasure: It is almost impossible for anyone to launch a mugshot website in a way that search engines do not immediately recognize as a mugshot site, so there really is not a good workaround here. The algorithm does not target sites that publish mugshots alongside other types of content. It only punishes sites that publish mugshots or arrest records exclusively, so someone could conceivably publish a large number of mugshots formatted like news articles provided they are willing to hire a large number of writers to write original articles about the arrestees and avoid using words like “mugshot.”

Revenge Porn

In June of 2015 Google banned revenge porn using an extremely liberal definition of the term that applies to consensual pornography and could easily be misconstrued to apply to non-pornographic images. Google defines revenge porn to include video or images that someone consented to be in if they “intended the content to be private and the imagery was made publicly available without your consent” and stretches the definition of porn with the phrase “or in an intimate state” (see Remove non-consensual explicit or intimate personal images from Google).

This goes beyond simply banning the dissemination of non-consensual pornography. Porn can only be accurately labeled as non-consensual when the people in it never consented to be photographed or recorded in that state. A good example is the nude images of Jackie Kennedy Onassis that Hustler Magazine published in 1972 after her husband tipped off photographers as part of a smear campaign. Hustler could easily defend the publication because she was knowingly walking on a beach in plain view of anyone nearby, but she would still likely be labeled a victim by Google today. It isn’t as if someone hid a camera in her bathroom. The most common type of revenge porn involves consensual porn created by people for an intimate partner. In such cases someone knowingly creates a pornographic image of themselves and consents to its dissemination by sending it to someone they are or wish to be romantically involved with. If that person posts it online later, the subject can now claim it was not consensual and get it removed from Google. To Google’s credit, a lot of revenge porn might be copyright infringement because people’s original works are copied without permission, but sometimes the works involve more than one person and one of the other people disseminates the work, which they may have every right to do under copyright law as a co-copyright holder or under the fair use doctrine.

We have never had issues with Google over revenge porn because we don’t operate adult websites. When people post porn of any type on our sites we take it down to avoid being labeled by SafeSearch as adult content. SafeSearch effectively blocks pornographic images from Google results unless the searcher wants to see porn. SafeSearch would be the appropriate mechanism for controlling the spread of consensual pornography being distributed without consent. That way only people that want to see it can. Otherwise Google is just playing big brother in an area where the government is up against a slew of legal challenges by people whose First Amendment rights are being violated by revenge porn laws. An Oregon man convicted under that state’s revenge porn law recently wrote an excellent Amicus Curiae brief for the Supreme Court challenging a similar statute in Texas in which he describes the history of the Heckler’s Veto.

A Heckler’s Veto is a legal term typically used to describe government actions to silence a speaker for the purpose of preventing a reacting party’s behavior. A common example would be finding an excuse to arrest someone before they engage in a protest likely to spark outrage and possibly violence (ex: arresting a preacher for unlawful transportation of fuel to stop him from burning Qurans). Such cases typically involve government actions in public spaces, but what happens when the public square is owned by a private corporation? Google is what happens. It raises the question of how much control a private company should be allowed to have over the flow of information before it is considered a public utility subject to the same restraints as the government.

Example: In 2018, attorney Carrie Goldberg went to the press about her effort to remove revenge porn from Google on behalf of her clients. She claimed that her clients were successful in removing revenge porn from Google, but unsuccessful at keeping it removed (see: New York Post).

Countermeasure: As described by Goldberg, “even if it’s removed from search results, the content is still online, and web hosts can manipulate the URLs to make sure the image is still driving traffic.” This means that a URL changing program like the one developed to counter the fake court order based removals in 2011 would likely work for getting revenge porn re-indexed as well. This is somewhat surprising given Google’s demonstrated ability to detect copyrighted images and videos. One would think that they would have a similar mechanism in place for detecting known revenge porn, but apparently not.

Search Quality Guidelines Handbook

In 2015 Google released their guidelines for the human evaluators that rate websites for them. The current version of their Search Quality Evaluator Guidelines was released in 2020. It is 164 pages long, so we are going to focus on areas that direct evaluators to give sites low ratings without evaluating the quality of the content. They do this by applying a categorical approach to sites that fit certain criteria and then directing evaluators not to rate those sites above a specific level.

Reputation of Sites and Content Creators

We will not take the position that the reputation of an author is completely irrelevant when evaluating his or her work, but nobody has a reputation bad enough to stop them from being able to produce high quality content. Unfortunately, Google does not see it that way and instructs evaluators to consider what they call Expertise, Authoritativeness, and Trustworthiness (E-A-T). In order for a website to have high E-A-T it must have a positive reputation. If a website or the site’s creator engages in an activity that results in what Google considers a “negative reputation” then “the High rating cannot be used for any website that has a convincing negative reputation.” This means that even a website packed full of high quality content created by an expert on the site’s subject cannot be considered high E-A-T if that expert has a negative reputation. This essentially serves as a ceiling that some people cannot get past no matter what they do.

In some cases the reputation of a webmaster precludes anything he or she ever creates from being given anything but a lowest quality rating. “Use the Lowest rating if the website and the creators of the MC have a negative or malicious reputation.”

Example: A glasses salesman named Vitaly Borker was convicted of threatening customers as part of a scheme to increase his organic search traffic with backlinks from negative articles complaining about him. It worked so well that The New York Times wrote an article about it and Google slapped him with a manual penalty. They also claimed to have improved their algorithm using sentiment analysis to stop sites from getting credit for links in negative articles. He eventually went to prison on a variety of charges related to the scheme. According to Google’s guidelines any site owned or operated by Borker is incapable of receiving a high E-A-T rating no matter what he does. Even if he were to create a site about being a glasses salesman in prison and write the best articles ever written about being a glasses salesman in prison, Google’s guidelines dictate that he could never receive a high E-A-T rating. His articles themselves are also incapable of receiving a highest quality rating no matter how good they are because “The distinction between High and Highest is based on the quality and quantity of MC, as well as the level of reputation and E-A-T.” For someone with Borker’s reputation, or just about anyone with a criminal record, a “mildly negative reputation for a website or creator of the MC, based on extensive reputation research” alone can cap the highest possible quality rating their pages can receive at Low. In Borker’s case it is likely that any page he creates will receive the lowest quality rating as “potentially harmful” because of his “extremely negative or malicious reputation.” Google even uses his case as an example of when a site should receive a lowest quality rating based on webmaster reputation.

Countermeasure: Don’t use your real name if you have a reputation like Borker’s. That is not a total workaround since Google evaluators are instructed to give lower ratings to sites whose owners they cannot identify. In some cases anonymity is the basis for a low quality rating: “There is an unsatisfying amount of website information or information about the creator of the MC for the purpose of the page (no good reason for anonymity).” Finding a business partner with a positive reputation would be the ideal solution for someone like Borker.

“Hate Speech”

“Hate speech” is a term used far too liberally these days. Google is no exception. They define “hate speech” as anything promoting hate against “a group of people, including but not limited to those grouped on the basis of race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender or gender identity.” Notice the part that says “including but not limited to” because that opens the door for applying the “hate speech” classification to anything that advocates against any group for any reason. Google then says, “Hate may be expressed in inflammatory, emotional, or hateful-sounding language, but may also be expressed in polite or even academic-sounding language.” By that logic you could write the most politely worded criticism of any group and have it classified as “hate speech” just because you obviously hate whoever you’re criticizing. This manifesto will likely be categorized as “hate speech” and given a lowest quality rating no matter how good it actually is. It doesn’t matter that it is a well researched page written by someone that obviously knows a thing or two about Google. The site as a whole will likely be treated the same even when it becomes the most comprehensive source of publicly available information about Google employees on the internet (it is not there yet, but it will be).

Example: Stormfront.org is a white nationalist community forum known for promoting white supremacist views. According to Google’s policy every page on it is to be given the lowest quality rating. We don’t use Stormfront and honestly have no idea what the true quality of the content really is, but that is not the point. The point is that we are perfectly capable of signing up for Stormfront and posting a high quality article, but Google is incapable of rating that article anything other than lowest quality.

Countermeasure: Create a new website with different content and hope that Google never categorizes it as “hate speech.”

Suspected Maliciousness

Google provides a lengthy explanation about what constitutes a malicious page using examples like promoting scams, linking to malware, and phishing, but then directs evaluators to “Use the Lowest rating if you suspect a page is malicious, even if you’re not able to completely confirm that the page is harmful.” That sentence sums up the idiocy of Google’s guidelines better than any other. After describing what a malicious page is they require no proof that a page actually is malicious to give it a lowest rating.

Example: Any page that might be malicious.

Countermeasure: None other than doing all you can to make it obvious that your page is not malicious and hoping that is enough.

Potential Misinformation

The word “potentially” is used a lot in Google’s guidelines as a catch-all that allows evaluators to rate anything as lowest quality unless they are sure it is accurate. They do try to remedy that somewhat with terms like “demonstrably false information” but follow it up with statements like “If a claim or conspiracy theory seems wildly improbable and cannot be verified by independent trustworthy sources, consider it unsubstantiated.” That might make sense in some cases, but it opens the door for treating anything as false without proof that it is true. This criterion is not confined solely to YMYL (Your Money or Your Life) pages, where stopping the spread of false medical information or financial advice is important. It is being applied to gossip as well and chills discussions about anything unproven. Google has gone as far as penalizing entire sites based on a small percentage of proven inaccuracies, as if nothing anyone says on them could be true.

Example: Articles about extraterrestrial life have been labeled as lowest quality because the government has successfully covered up the existence of UFOs.

Countermeasure: Don’t ask questions.

Evaluator Guidelines Conclusion

Google goes to great lengths to stop their evaluators from evaluating the true quality of content.

Anti-Conservative Bias

In 2016 the election of Donald Trump inspired Google to shadow ban or suppress conservative websites that liberals blamed for getting Trump elected. This was followed in recent years by efforts to suppress anything Google considers misleading or tied to extremism. The problem with this is that Google uses extremely liberal definitions of such terms that end up covering stuff that is arguably not misleading and unrelated to extremism.

Google almost always denies shadow banning or suppressing things, explaining them away as changes made to their algorithm to improve the quality of their search results. In 2019 Google was called before a Senate committee to answer bias allegations and said “we rely on an algorithmic approach and implement rigorous user testing and evaluation before we make any changes to our algorithms.” (see Reuters). The problem with that explanation is that it says nothing about who those human testers are or what their political leanings are. If the political leanings of those testers are reflective of Google as a whole then they are mostly liberals and their evaluations will reflect those views.

Liberals tend to be far worse at remaining objective than conservatives. That is because their ideology is often highly intertwined with their emotions. Opposing ideas often trigger episodes of emotional distress. In that state they are incapable of objective thought because they conclude that no idea that makes them feel that way could possibly be correct. They become incapable of entertaining the opposing idea and the debate often ends in an angry outburst. Such people only think how they feel. To make matters worse they remember how they felt, assume others will feel the same way unless there is something wrong with them, and conclude that the idea needs to be suppressed before it makes someone else feel the same way.

If a search algorithm is influenced by the aforementioned form of liberalism then it will favor content consistent with that viewpoint. Content from leftist outlets like CNN and The New York Times will receive high marks while content from conservative outlets like Fox News, Breitbart, Daily Caller, and The Federalist will get low marks. The algorithm will use those low marks as quality indicators and rank that content lower. The only way Google could possibly have a team of human evaluators capable of achieving unbiased results would be if half of them were right wing. At least then the number of biased decisions might balance out and the algorithm could base its quality scores on factors that truly indicate quality regardless of ideology.

Example: In 2020 conservative news outlets reported sudden unexplained drops in search visibility and organic search traffic. Those outlets included Fox News, Breitbart, The Federalist, and Daily Caller. See the video below from Fox News:

Breitbart suffered a 99.7% decrease in visibility and a two-thirds drop in organic search traffic. Breitbart News editor-in-chief Alex Marlow described how it happened: “We started looking at our traffic, and gradually since the 2016 election, Google has been diminishing our search results, and then all of a sudden, in May of this year we virtually lost all Google traffic, all search traffic altogether.” There are only three possible explanations for Breitbart’s losses:

  1. Google’s May 2020 Core Update automatically classified Breitbart as a low quality site and demoted it.
  2. Google employees manually penalized Breitbart.
  3. Google’s algorithm was changed to demote websites promoting conservative views.

We can rule out the first possibility for the simple fact that Breitbart.com does not violate any of Google’s Webmaster Quality Guidelines. The site produces professionally made original content that a lot of people like. It is a prime example of a site doing everything a site should do to rank well on Google. As a result it has a lot of quality backlinks. A site with Breitbart’s content and backlink profile should be doing well on Google. Explanations 2 and 3 are the most likely explanations for Breitbart’s decline. It is likely that Google employees noticed Breitbart doing really well in search and demoted it manually because they just didn’t agree with the views expressed. That was probably combined with an algorithm update designed to identify content supporting conservative views and demote it. It would not be hard for Google to figure out what words and phrases commonly support right wing views. Updating their algorithm to demote content containing those words and phrases would explain what happened.

Countermeasure: None at the moment. When Google writes off an entire category of content as low quality it is not possible to create content in that category that ranks well no matter how well you write it. The only countermeasure would be to find some way to compel Google to stop manipulating search results in this way.

NOTE: We are not Trump supporters. We voted against him because of his attack on the CDA and his treatment of protesters. At the same time we were worried that the Democrats would use Trump as an excuse to push new censorship laws. Now they are doing just that.

Removal Policy Based Removals

In 2018, Google launched a removal tool targeting sites they claim have “exploitative removal policies.” It allows anyone to remove any search result simply by establishing that the site hosting it allows people to remove content for a fee. For instance, if someone posts something about you on a website, you want it taken down, and a paid removal option is available, then Google will remove the link for free. This might seem at first like a well-intentioned attempt to combat the spread of false information, but attacks on paid removal options have historically made things worse.

When states began passing laws prohibiting mugshot websites from charging removal fees, most of them stopped offering removal options altogether or offer them only when required by law. See the Mugshots.com removal policies. Today you can only remove your mugshot “If your case has been expunged, sealed, order to be destroyed etc. or an identity theft” or your relatives can have it removed after you die. In all other cases the best you can hope for is an update reflecting the final disposition of the case. When the charges stick you no longer have the option to remove your mugshot. That is not an improvement over paid removal options.

Websites like Mugshots.com have no incentive to remove content unless a fee can be charged. Content moderation always incurs labor costs that simply are not worth it for the website most of the time. Every piece of content that gets removed reduces the site’s traffic and advertising revenue. It simply does not make sense to waste time and money on an activity that reduces income. That is why paid removals are a good thing.

Shadow Suppression

After Google began granting removals under this policy they began shadow suppressing sites hosting the content. As a result sites hosting stuff like cheater reports noticed a drop in search visibility, rankings, and traffic that could not be remedied, and many webmasters were forced to start rotating domain names to get fresh starts. This was evident in 2020 when we entered this industry. When evaluating competitors we noticed a lot of sites being taken down only to see new ones pop up with the same content. We were not sure why at first because it defied basic SEO. The longer a site stays up the more backlinks it gets and the more authoritative it appears to Google’s algorithm. This is almost always the case unless a manual action has been taken against the site. We concluded that the old sites likely suffered Google penalties for unknown reasons and the new domains were needed to avoid them. Google would eventually make admissions confirming our theory.

In 2021 The New York Times did a series about a serial libeler named Nadire Atas, who had abused countless websites including ours to smear countless people (see A Vast Web of Vengeance). The first article in the series did a fairly good job at placing blame for the content on the author where it belonged, but it also blamed parties that did nothing wrong, including Google and the sites Atas abused. The piece was strikingly similar to the anti-mugshot piece that inspired Google to punish mugshot sites years earlier. People were profiled in a manner intended to trigger strong emotional reactions and a public outcry. Like a lot of liberal outlets covering the spread of false information, the Times failed to place any blame on the people who believe the lies they read online.

When people react inappropriately to stuff they read online they are to blame for their own inappropriate reactions. That is because people should never believe what they read on the internet unless they do additional research to back up what they find. Nobody should use the internet as an employment screening tool unless they are capable of evaluating what they see competently. When an employer bases a hiring decision on a poorly written article by an anonymous author saying nasty things, they are to blame for their own inappropriate hiring practices. It is not the website’s fault that some hiring manager used an inappropriate screening tool in an inappropriate way. The societal change needed to combat the consequences of false information is public education, not censorship.

Unfortunately, the majority response to the coverage was that something needed to be done to censor the websites before someone else reacts inappropriately to the content. A classic demand for a Heckler’s Veto that went unanswered by the government because the sites involved were not breaking any laws.

The next article in the series singled out the websites hosting the content and applied the technically incorrect label “The Slander Industry.” The subject matter of the article dealt entirely with allegedly false written statements, which can only be called “libel” if they are in fact false. It is impossible for any industry that traffics in false written statements to be a “slander” industry. Per Google’s own guidelines the article should have been categorized as low quality due to the title being demonstrably false, and that rating should have been applied to other works by the same author. Instead Google treated it seriously and manually suppressed every site mentioned in it.

The suppression began at the beginning of May and impacted all sites mentioned in the article equally regardless of their removal policies. The penalty even impacted content written by site administrators that had received media attention due to being high quality. Google basically decided that every piece of content would receive a low quality rating regardless of its actual quality. Now, we admit that the quality of user generated content on those sites is quite poor. Most of the content Google demoted sucked, but it was often some of the most relevant content about its subjects available online. This is where Google crossed a red line with us. They began ranking irrelevant content above ours just because they don’t approve of our reputation improvement services. This act directly contradicts Google’s mission statement by deliberately failing to properly organize the internet’s information and make it useful. Users are not best served when the most relevant results for their query are buried beneath stuff that has nothing to do with what they are looking for.

We are not alone in this. The manual action we suffered is shockingly similar to what Google did to the mugshot industry. When you searched for the name of someone featured on Mugshots.com 10 years ago their mugshot and rap sheet would likely be on the first page. It would be there because Google’s algorithm concluded correctly that the content was relevant to the intent behind the user’s query and potentially useful. That was the case until Google decided that the information was too useful and suppressed it to prevent the subject from reacting negatively. A classic Heckler’s Veto.

Google’s Confession

In June of 2021 Google finally owned up to their actions. Their admissions confirmed months of suspicions that they had been secretly targeting our sites and others manually. We suspected manual actions because our sites were performing in ways that were not likely related to algorithms. It was as if someone had been flipping switches.

It started gradually at first. We noticed that our sites were being outranked by content that we used to easily outrank. Part of that was because many years ago Google couldn’t parse JavaScript well, so sites built using a lot of JavaScript performed worse because Googlebot couldn’t see their content. They also gave more weight to exact match keywords in page titles, URLs, and other areas. That benefitted us greatly, but eventually Google paid more attention to on-page content and began ranking better pages above us. We have no objection to that. Most of the articles written by our users are low quality, so we have no complaints when Google decides to rank better content above ours. We were disappointed, but that was all.

Then we became suspicious because Google started ranking inferior content above ours. Inferior content that was lacking in quantity, quality, relevance, and backlinks. We thought part of that was due to Google’s RankBrain and BERT algorithms, which placed an emphasis on synonyms and phrases of like meaning. Unfortunately, they placed a little too much emphasis on domain authority over relevancy. As a result users often complained about not being able to find exactly what they were looking for because search results were full of similar content from what are usually better sources of information. We believed that Google’s algorithm was choosing to rank similar content above ours even in exact match situations. However, eventually we noticed content that was not similar or relevant outranking us in many cases. We believe this was likely due to a penalty based on site reputation or large numbers of complaints sent directly to Google.

In May we noticed an overnight drop in rankings on every site of ours mentioned in the New York Times. The penalty was applied to each one equally even though one of them did not have an exploitative removal policy. The only possible explanations were manual actions or an algorithm change targeting us specifically.

Google admitted to manually taking action against us to The New York Times on June 10, 2021. They claim to have automatically categorized anyone whose name appears on our sites as a “known victim” for the purpose of suppressing our content from appearing in search results for their names. These so called “known victims” are not required to provide any proof that our users said anything about them that is not true. They simply have to say that their names are on sites that are nice enough to offer them removal options just because a fee is involved. This move is intended to make it impossible for people to speak out anonymously and see their voices appear on Google where they belong. They’ve gone as far as to suppress content that should be on the first page as far back as page 5 even though pages 2-4 contain no relevant results. We will not tolerate that.

If a person doesn’t want to see a poorly written negative article about themselves rank well on Google they should just write some high quality positive articles about themselves. Then Google, being a neutral evaluator of content quality, would ideally rank those above the negative stuff. That is why we almost always recommend that our reputation management clients create their own website and write positive articles about themselves. If the negative ones are as bad as they usually are then suppressing them in that way should not be difficult. We don’t have a problem with suppression done the right way, but where we draw the line is when Google chooses to suppress our content regardless of its relevance or quality. Offering to remove content for a fee is not a relevant factor when evaluating the quality of the content itself.

Example: As mentioned above, several sites that host accusations against people were suppressed below irrelevant search results just because Google doesn’t like their removal policies. This was also applied to sites whose policies did not meet Google’s technical definition of an “exploitative practice.” Google provided no roadmap for recovery. When Google acknowledges taking manual action they typically send the webmaster a notice and that notice serves as a roadmap to recovery after which the site can be submitted for reconsideration. Given the way Google treated the mugshot sites we don’t believe that Google will treat us any differently if we abolish our reputation improvement services.

Countermeasures: The only countermeasure known to work is creating new sites that do not offer to remove content or otherwise help clean up the reputations of private individuals for a fee. Business review sites seem unaffected, so people can still offer to clean up negative business reviews for a fee. Google seems to be leaving new sites alone as long as they do not charge removal fees. That is true even when the content is exactly the same. It is a great example of Google’s hypocrisy when they choose not to suppress the same content just because removing it is not an option. Google is basically saying that they would prefer it if people had no way to remove that type of stuff at all. It clearly shows that their decision had less to do with improving the quality of their search results and more to do with public relations. We will not tolerate being singled out and made examples of.

Our Compromise Proposal

We have a proposal that we have attempted to pitch to Google as a compromise that, if adopted, we feel would be better for Google, Google users, ourselves, and America. It involves flagging pages with potentially false information with warning labels in search and rewarding webmasters that flag their own content.

Warning Labels

We stole this idea from Twitter. Last year Twitter deployed a series of warning labels to combat the spread of misleading information on their platform. They created three categories of labels for misleading information, disputed claims, and unverified claims. We propose that Google adopt a similar system and implement it in a way that allows the public and webmasters to contribute.

Misleading Information

This label would be applied only by Google employees in cases where they receive proof that a page they are linking to contains misleading information. The label would be in red font, begin with an exclamation point, and contain a notice such as “This content is known to contain misleading information.” Google should of course use that label as a ranking factor and rank content with that label below all other relevant content for a query that does not also contain that label, but not below irrelevant content.

Disputed Claims

This label could be applied to Google results by Google users themselves. They would be required to submit a dispute to Google directly using a form similar to the form Google currently uses to solicit removal requests. When a Google employee verifies that the dispute is valid (i.e. not spam submitted for the sole purpose of sinking a competitor’s page in search) they would apply a disputed claim label. The disputed claim label would be in orange font, begin with an exclamation point, and contain a notice such as “The statements or assertions in this content are disputed.” Google should use such a label as a ranking factor and rank disputed claims below undisputed claims, but above misleading information. They would however need to be careful that quality content is not unfairly demoted just for being disputed, so in some cases it would make sense not to demote disputed content below all undisputed content.
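
A toy sketch of the intake flow we have in mind, with invented names throughout; the point is simply that a human review gate sits between the public form and the label:

from dataclasses import dataclass

@dataclass
class Dispute:
    url: str        # the search result being disputed
    claim: str      # which statement on the page is disputed
    evidence: str   # the submitter's supporting explanation
    status: str = "pending"

def review_dispute(dispute, labels, is_spam):
    """Employee review gate between the public form and the label.

    Only a human reviewer can turn a submission into an orange disputed
    claims label; spam submitted to sink a competitor's page is rejected
    and no label is applied.
    """
    if is_spam:  # the reviewer's judgment call
        dispute.status = "rejected"
        return
    dispute.status = "accepted"
    labels[dispute.url] = "disputed"  # the label now shows in search results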

Unverified Claims

This label should be applied by webmasters themselves. Google could enable this by recognizing a new series of meta tags designed to better identify user generated content (UGC). We propose two types of tags, one of which would flag claims as unverified. Proper use of these tags would ideally result in rewards for webmasters.

UGC: Unverified

Best practices for this label would be for webmasters to flag all unverified UGC by default. It would involve adding code such as this to the head section of every post created by an end user:

<meta name="ugc" content="unverified">

Content flagged in this manner would receive a warning label in search results. It would be in yellow font, begin with an exclamation point, and contain a notice such as “This content has been flagged as unverified by the source website.” Google should use these tags as a ranking factor and rank unverified UGC below verified UGC and non-UGC, but with checks in place to prevent high quality content from being unfairly penalized just for not being verified.
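
On the webmaster’s side, flagging by default is a one line template change. Here is a sketch of what that could look like, assuming a typical rendering pipeline (the function and field names are hypothetical):

from html import escape

def render_post_head(post):
    """Build the head fragment for a user generated post.

    Every UGC post carries the unverified flag by default; the flag only
    changes once a human has verified the claims (the verified variant
    is described in the next section).
    """
    tags = [f"<title>{escape(post['title'])}</title>"]
    if post.get("is_ugc"):
        state = "verified" if post.get("claims_verified") else "unverified"
        tags.append(f'<meta name="ugc" content="{state}">')
    return "\n".join(tags)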

UGC: Verified

Best practices for this label would be for webmasters to verify UGC and identify it as such. It would involve adding code such as this to the head section of every post created by an end user after their claims have been verified:

<meta name="ugc" content="verified">

Content identified in this manner would not receive a warning label in search results. Google should use it as a ranking factor and rank it somewhere between unverified content and content created by the source site itself, but with checks in place to prevent high quality content from being unfairly penalized just because it was created by an end user.
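
Putting the whole proposal together, here is a toy sketch of how the label hierarchy could feed into ordering; the tier numbers, relevance floor, and result structure are invented for illustration:

# Lower tier = ranked higher among equally relevant results.
LABEL_TIERS = {
    "first_party": 0,     # content created by the source site itself
    "ugc_verified": 1,    # verified UGC
    "ugc_unverified": 2,  # yellow unverified warning label
    "disputed": 3,        # orange disputed claims label
    "misleading": 4,      # red misleading information label
}

RELEVANCE_FLOOR = 0.2  # below this a result is treated as irrelevant

def order_results(results):
    """Order search results by label tier among relevant results only."""
    relevant = [r for r in results if r["relevance"] >= RELEVANCE_FLOOR]
    irrelevant = [r for r in results if r["relevance"] < RELEVANCE_FLOOR]
    # Labels push content down the page among relevant results...
    relevant.sort(key=lambda r: (LABEL_TIERS[r["label"]], -r["relevance"]))
    # ...but labeled-and-relevant still beats irrelevant, which is the heart
    # of the proposal: labels demote within relevant results, never below junk.
    irrelevant.sort(key=lambda r: -r["relevance"])
    return relevant + irrelevant

The checks mentioned above, making sure high quality pages are not unfairly demoted just for being disputed or unverified, would amount to exceptions to the tier sort and are omitted here.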

Rewards

Webmasters that properly flag UGC on their sites should be rewarded by exempting content created by the source websites themselves, and their sites as a whole, from penalties associated with low quality UGC. In our case we would expect our rankings for pages that do not contain UGC to be restored after we add UGC unverified tags to all unverified UGC. The unverified UGC itself would ideally rebound somewhat by no longer being ranked below irrelevant content.

Penalties

Bad actors would inevitably abuse this system by flagging all or most of their UGC as verified without verification. In such cases Google should issue manual actions against them. These actions would be similar to penalties they already issue for things like abusing structured data. Structured data was introduced several years ago to provide webmasters with a way to help Google better understand their content. Webmasters that use structured data well have the opportunity to be rewarded with rich snippets. Sometimes webmasters try to get rich snippets they don’t deserve by marking up their pages with stuff that does not belong there (ex: marking up articles as organizations with 5 star reviews for the purpose of creating rich snippets featuring 5 stars). In such cases Google should apply misleading content labels site wide, send the webmaster a manual action notice, and lift the penalty if the site corrects the problem before submitting a reconsideration request.
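
For readers unfamiliar with structured data, this is roughly what the abuse described above looks like. The vocabulary below is real schema.org markup, but the page and the numbers are our hypothetical:

import json

# Markup that belongs on an article page: honest, and earns no stars.
honest = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "An ordinary news article",
}

# The abuse: the same page marked up as an Organization with a perfect
# aggregate rating, purely to get a 5-star rich snippet in search.
abusive = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "An ordinary news article",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "5",
        "reviewCount": "947",  # invented number for illustration
    },
}

def jsonld_script(data):
    """Wrap structured data in the script tag webmasters embed in a page."""
    return ('<script type="application/ld+json">'
            + json.dumps(data)
            + "</script>")

Verified UGC tags would invite exactly the same style of gaming, which is why the manual action and reconsideration cycle described above matters.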

Good for Google, America and Victims

We believe that warning labels would be a better alternative for Google, Google users, ourselves, and America than the current system of removing content and demoting sites for reasons irrelevant to content quality. The victims of people like Nadire Atas would likely disagree with us and argue that Google should do more to delete anything that attacks their reputations, but that option is not realistic for scalability reasons. On top of that, content removal based on mere allegations of wrongdoing is an unjustified restriction on speech that does more harm to America than good. Censorship also makes Google ads less profitable, which should outrage shareholders. It can also be bad for victims because it increases the credibility of remaining content.

Scalability

On April 26th, 2021 New York Times reporters Aaron Krolik and Kashmir Hill appeared on the In Lieu of Fun podcast hosted by Kate Klonick and Benjamin Wittes to discuss their piece on the so called “slander” industry. At about 42:00 Krolik said, “I don’t think if all those people were submitting 150,000 URLs that Google would be able to handle this kind of volume.”

Those were our exact thoughts when we first heard of that form, and he really hit the nail on the head with his theory as to why Google did little to promote it. There is no way that Google can offer a scalable solution that does not require them to incur significant labor costs processing manual reviews of those forms. In fact, the “known victims” classification was something we predicted as a possibility if Google were ever overwhelmed with requests. In such a scenario Google would likely be forced to choose between admitting to making promises they could not keep or making their system more scalable. The “known victims” algorithm update was not the result of Google wanting to help people, but an effort to avoid public embarrassment.

We noticed that several thousand pages from one of our sites alone disappeared the week after the article came out, right before the site itself was demoted in search results. The obvious cause was that all of a sudden people knew about Google’s free removal service and submitted thousands of requests, and Google realized that they would have to reduce demand for their service somehow to keep the wait from becoming so long that the service would be rendered basically useless. We hoped they would realize the futility of trying to censor us and knock it off, but we always feared that they would take this type of action against our sites rather than admit defeat. As The Times reported, the algorithm change made a noticeable difference, but it didn’t demote everything and other sites hosting the same content were left undisturbed just because they don’t advertise removal services.

We realize that Google could have done much worse, but we also know that it is almost impossible for them to make removals last. In fact, we have yet to see a scenario that prevents the same content from appearing in search when their bot finds it at a new location. We hope that it is only a matter of time until someone in accounting makes their PR people realize that trying to censor search in this way is a bad idea.

Our proposal is far more scalable than what Google is currently doing. First, warning labels give webmasters no reason to keep moving content to new URLs, a practice that currently ensures successful Google removals are temporary at best, so Google will inevitably receive far fewer requests for warning labels than they currently receive for removals. Second, the presence of warning labels will result in fewer removal requests. Third, giving webmasters incentives to identify unverified UGC will inevitably result in most unverified information being flagged with no need for Google to do anything on their end.

Bad for America

At 13:42 in the video in the aforementioned section, Kate Klonick discussed how Google censorship in the United States is derived from free speech violations in other countries:

“My understanding of that … is that that is a derivative service that comes from the right to be forgotten, uh, the right of erasure from the EU.”

– Kate Klonick

We believe she is correct. Google is nullifying the freedoms enjoyed by Americans under the First Amendment by applying un-American restrictions based on foreign laws. Google’s corporate structure exposes them to liabilities in the European Union (EU) and other places with inferior free speech protections because they have sizeable assets there. If Google does not comply with EU laws those assets can be seized and sold to pay whatever fines are levied on Google for refusing to comply with laws that would violate the First Amendment in the United States. In some cases Google simply limits their efforts to censoring content available in those areas, but at times they go beyond that and stop serving content to Americans. We found this out first hand after people used Canadian court orders to get content removed from Google.com instead of just Google.ca. We have also seen content removed from Google.com nationwide based on court orders from other states even though Google is based in California.

These removals conflict with our removal policies because we only honor court orders from competent jurisdictions, such as where the content is hosted, when the order involves requests to make changes to computers in foreign countries. This allows us to base our sites wherever free speech protections are best. In fact, we used to host all our sites in the EU when some European countries had better free speech protections than America and, as a result, American hosting companies were not ideal. Since the law changed we were given the boot and forced to move our servers outside the EU, where our hosting company does not have to worry about being fined for hosting our content after we refuse to honor requests made under the EU’s “right to be forgotten.” Structuring ourselves this way allows us to assure our users the best free speech protections in the world no matter where they are.

If Google wants to fulfill their mission they need to restructure themselves. We believe that Google should break itself up into smaller companies geographically if they want to maintain datacenters or other facilities in countries with inferior free speech protections. For example, if Google Inc. continues to run Google.com using American assets and creates a separate company to run Google.ca, they could ignore orders from Canadian courts to remove content from Google.com even if the second company is forced to remove content from Google.ca. They could also go a step further by not using any Canadian assets to operate Google.ca, which would allow Google Inc. to ignore Canadian courts as well while allowing their Canadian company to do other things that, unlike running a website, actually require a physical presence in that country. This type of structure is quite common in the banking industry where some banks offer offshore accounts, but also have U.S. branches. The U.S. branch is often incorporated as a separate entity so that U.S. courts cannot punish it for offshore activities. We suggest a similar structure to protect Americans from foreign speech restrictions and to extend the freedoms enjoyed by Americans to the rest of the world.

Unfortunately, Google seems intent on imposing the will of the world on America. That is not surprising if you look at the company’s current leaders. Google may have started off as an American company created by and run by Americans, but it has been run by foreigners for many years. In 2015, Sundar Pichai took over as CEO of Google when founders Larry Page and Sergey Brin created Alphabet Inc. In 2019, Pichai became CEO of Alphabet while retaining his position as CEO of Google. You don’t need to see his birth certificate to find out that he openly admits being born in India. There is a reason why the law requires the President of the United States to be born in America and that is because only Americans can be trusted to put America first. Private companies that control Americans’ access to information should be no different. When un-American people are allowed to call the shots they will inevitably make un-American decisions. We are pointing this out even though Google quality raters will likely give our manifesto a lowest quality rating for doing so. They will call it a form of “nativism” which they consider a form of “racism” or something else that ends with an “ism” and say that such things cannot receive a higher quality rating under any circumstances. The best we can hope for is that someday Americans will take action to secure our rights in a world where the public squares are owned by private companies.

Donald Trump’s attack on the CDA was not the right way to fight online censorship because it would have exposed websites like ours to legal liability for UGC if we removed content under certain circumstances, but he did make a good point. A point reiterated in his recent anti-censorship lawsuit against Google, Twitter, and Facebook. Trump argues that these tech giants amount to state actors due to their near monopoly on the flow of information and collaboration with government officials. Trump argues that these companies have crossed the line between being private companies and becoming public utilities. We agree. There was a time when they did their best to remain neutral, but as this manifesto makes clear that is not the case anymore. They took control of the public square with neutrality only to turn around and abuse that position by becoming biased. We believe that a system of censorship-based fines needs to be implemented. Fines that would be levied against tech companies with large market shares that silence users for political reasons. This would allow them to maintain their CDA immunity by not treating them as the publisher or speaker of UGC. It would also be good for their bottom line because it would give them a crutch to use when criticized for not removing offensive content. The New York Times could no longer accuse Google of being able to do something about the content of their search results, but more importantly advertisers could no longer blame them for running their ads alongside offensive content.

Impact on Google Ads

It might surprise you to learn that Google censorship often has an adverse impact on their bottom line beyond the labor costs associated with censorship activities. When Google declared war on content farms in 2011 they lost millions of dollars in advertising revenue due to content farms that ran Google ads losing traffic. The head of Google’s anti-spam team would later say:

“Google took a big enough revenue hit via some partners that Google actually needed to disclose Panda as a material impact on an earnings call. But I believe it was the right decision to launch Panda, both for the long-term trust of our users and for a better ecosystem for publishers.”

– Matt Cutts

We have yet to see any numbers released by Google to support Cutts’ claim. This scenario was somewhat different than political censorship because it really was necessary to improve the quality of their search results. Back then sites like Articles Base would publish large quantities of low quality articles full of links and make them available for free syndication just to make themselves appear to be authoritative in the eyes of Google’s algorithm. The Panda algorithm was necessary to keep low quality content from outranking high quality content. That is categorically different from ranking low quality content below irrelevant content just to make people happy. We think Google can likely credit Panda for helping them win the search engine wars and being a good thing for shareholders in the long run. We cannot say the same for political censorship.

The changes needed to combat censorship today would likely have a trickle down effect that would improve advertising revenue at Google. That would happen by giving Google a crutch to lean on when advertisers get mad over offensive content on the publisher side (Google AdSense) and by limiting their ability to refuse to run ads on the advertiser side (Google AdWords).

Google AdSense

Google’s advertising network is their bread and butter. There was a time when just about any lawful website could sign up for Google AdSense and monetize their content with relevant ads. When a user clicks on an ad the website gets paid. Google AdSense offers the highest PPC rates for small publishers. When a website is rejected from AdSense they must generate a monumental amount of traffic to make the same amount of money or find other ways to make money like selling physical goods and services.

When a website is rejected from AdSense, does not convert well for affiliate programs, sells no physical goods, is not worth paying a fee to post on, and the content is not worth paying to access, then often the only option remaining is selling off what inventory they do have. In some cases that inventory comes in the form of content others want removed.

Our first website ran Google AdSense for nearly a year before Google suddenly yanked their ads without explanation. They sent us an email saying that we violated their guidelines somehow, but did not identify any specific rules that we broke, nor could they because we did not violate their guidelines at all. They relied on a catch-all “we reserve the right” position. While we had good ads running we never considered selling removals because that would have crippled our ability to monetize our site with ads while at the same time inviting criticism as to the legitimacy of our services (i.e. “you only let people post that so you can charge removal fees.”). We tried AdSense alternatives but after a couple of years we were still making less despite quadrupled traffic.

Had Google not yanked their ads we would have been just fine, but after being rejected by AdSense alternatives we found ourselves in a position where our traffic would not reach the necessary levels to generate sufficient advertising revenue anytime soon, and we realized we would need to sell off some of the inventory. Our solution was to offer reputation management services targeting business clients. They were given the option to purchase reputation management packages that contained a variety of services including personalized recovery plans, meta data manipulation, and noindex meta tags. The recovery plans were the most time consuming because we would spend many hours telling people what we would do to improve their search engine reputations if we were in their shoes. The meta data manipulation would turn negative search engine snippets into positive ones. The noindex meta tag would direct search engines not to include specific pages in their results. None of those practices fit Google’s definition of an “exploitative removal practice” but that has not stopped Google from treating them just the same.

We did charge a high price, but not out of greed. We charged high prices to keep demand low while we worked on other projects that hopefully would be easier to monetize using more traditional methods, with the hope that our traffic would eventually rise enough to qualify for better AdSense alternatives and allow us to do away with reputation management entirely. Google’s actions have made qualifying for better AdSense alternatives impossible, and because they don’t offer a way to recover there is no reason to stop offering reputation management services. If Google were no longer allowed to censor offensive material then advertisers could no longer complain about their ads being run alongside offensive search results.
That is especially true when Google allows advertisers to place keyword-based restrictions on which queries their ads run alongside and to ban their ads from running on certain websites via the display network. A better ecosystem would exist if Google allowed all lawful industries into their network while at the same time taking steps to avoid incompatible matches. For instance, it would not make sense to run ads for conservative religious groups on adult websites, but it also does not make sense to exclude adult websites entirely when there are plenty of businesses whose ads would do well on adult sites. The ideal approach would be to categorize offensive sites as such and restrict their ad slots to advertisers tolerant of offensive material. That approach would be enough to keep people like us from charging removal fees.
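For readers unfamiliar with the mechanism, here is a minimal sketch of the noindex directive mentioned earlier in this section. This is standard, publicly documented search engine behavior, not anything specific to our implementation:

    <!-- Placed inside the <head> of a page, this tag asks search
         engines not to include the page in their results: -->
    <meta name="robots" content="noindex">

The same directive can also be delivered as an HTTP response header, which works even for non-HTML files such as PDFs:

    X-Robots-Tag: noindex

Either way, the page stays online and reachable by direct link; it simply stops appearing in search results once the crawler revisits it.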

Google AdWords

Combating the censorship of search results and the AdSense publisher network would likely lead to fewer restrictions on AdWords advertisers. There was a time when any lawful business could increase its traffic by purchasing ads on Google. That was the best reason Google had for combating webspam, because businesses were finding ways to use SEO to save money they would otherwise spend on Google ads. Unfortunately, Google has gotten political in this area as well, despite the obvious disadvantage to shareholders.

When we first noticed our sites getting hit by what turned out to be the "known victims" update, we contacted a local SEO company for help, but they refused to work for us, saying it would be a waste of their time and our money because we appeared to be in an industry targeted by Google on ideological grounds. Their exact words were:

Google is a political nightmare and does things as they please to fit their agenda. It’s a tough one with your sites because once Google has you in their sights they will secretly affect your rankings. Of course, none of this is public and they would never disclose any of this information. 

I know this to be the case because Google banned an entire industry, (Bail Bonds) from being on their paid platforms for advertising. All political. We then started to see huge anomalies in SERP’s and never could get traction there ever again. An entire Industry spending $12M+ a year in paid ads.  No matter what we did to try and get rankings they would continue to do worse. Single handedly Google is wiping out the Bail Industry to fit their ideological views. 

I wish we could help you out but it would just be a waste of time for us and money for you. Big Tech censorship… 

Best of luck to you and if you make progress please let me know. I think it’s going to take an act of Congress against Big Tech censorship and our 1st amendment rights being infringed upon. 

James Piccolo, President | Strategic Marketing Inc.

We were disappointed that nothing could be done, and relieved that we were likely on the right track. After all, the only thing more incompetent than failure is not knowing why you failed. We looked into the bail industry and found a disturbing tale in a blog post written by David Graff, the same Google executive who admitted to The New York Times that Google had targeted us. Graff wrote:

At Google, we take seriously our responsibility to help create and sustain an advertising ecosystem that works for everyone. Our ads are meant to connect users with relevant businesses, products and services, and we have strict policies to keep misleading or harmful ads off of our platforms—in fact, we removed 3.2 billion bad ads last year alone.

Today, we’re announcing a new policy to prohibit ads that promote bail bond services from our platforms. Studies show that for-profit bail bond providers make most of their revenue from communities of color and low income neighborhoods when they are at their most vulnerable, including through opaque financing offers that can keep people in debt for months or years.

– David Graff

What right-minded Google shareholder would not be outraged to learn that Google removed 3.2 billion ads in 2017? If the bail bonds industry is any indication, most of those ads were probably not bad at all. We do believe Graff when he says that bail bonds can keep people in debt for years, but we also believe that being in debt is better than being in jail, so Google is not doing people a service by blocking those ads. We even asked an ex-con at our company whether he would rather be in debt or go back to jail, and he picked the debt without hesitation. Graff seemed to weigh making money off communities of color and low-income neighborhoods above everything else. We will not take the position that low-income communities of color don't matter, but they shouldn't matter more to a company than its own shareholders. That is just not how businesses are supposed to operate. If we owned Google stock we would be even more outraged.

Our research on the bail industry led to the discovery of similar decisions targeting payday loans, guns, knives, legal marijuana, cryptocurrencies, tobacco, mail order brides, gene therapy, real money gambling, and other lawful businesses. The reasons given by Google are usually similar and involve a stated need to protect stupid people. Google has concluded that stupid people need to be protected from their own stupidity, so anything that could lead a person of average or below average intelligence to make a bad decision is not allowed. One problem with that approach is that people stop staying vigilant once they believe Google is providing a safety net, so the material that does slip past the censors is far more likely to cause harm, because people assume that if there were anything wrong with it, it would not be advertised through Google. The best way to protect stupid people is to teach them not to do stupid things, not to destroy any industry that might otherwise cost idiots money.

If Google were no longer allowed to use their monopoly on the public square to discriminate against lawful businesses, then people would no longer lose their jobs to the sissification of the internet.

Good for Victims

Our proposal is good for victims because it nullifies what, in most cases, was their main complaint. In the weeks leading up to the so-called "slander industry" piece, we talked to victims of Nadire Atas and others, as well as The New York Times. The most common gripe was that the false accusations appeared credible because of how they appeared on Google. In those cases the first page of a victim's results would consist primarily of false accusations. We would respond by pointing out that if users clicked on the links and actually read the articles, they would likely doubt their validity, but the victims said that most people just glance at the Google results and form conclusions without bothering to click on the links at all. That took us back to marketing 101, where we were taught that television viewers forget about 98% of what they see within minutes of watching it. Most Google users are so lazy that they allow search results alone to influence them without any further research, even when making important decisions. That is why people try so hard to remove content from Google. We don't think that would be the case with warning labels.

We think that once people get used to seeing warning labels and understand what they mean, those labels will jump off the page at them more than anything else. Even if that were not the case, they would at least be as noticeable as the corresponding search results themselves. Just seeing warning labels would be enough to make most people start questioning the accuracy of the information.

Another problem our solution nullifies is the damage caused by false information that makes its way through Google's filters. If users get accustomed to seeing only positive information on Google, they will start thinking that negative results must be true because they wouldn't be on Google otherwise. Since we have already established that Google cannot keep that type of content out of search entirely, or even keep it from ranking, displaying warning labels alongside most of the false information would surely be more beneficial than censorship. We don't think the items that slip through Google's filters would be nearly as credible if they appeared alongside similar results carrying warning labels. Imagine if all 10 results on the first page of your Google results accused you of being a pedophile, but 8 of them had warning labels. Would that be better or worse than having only 2 first-page results accuse you of being a pedophile with no warning labels at all? We think the 2 unlabeled results would be worse, and that is the situation right now: people have been able to remove or suppress most of the negative information about themselves, but the stuff that gets past Google's censors still often ranks on the first page. If Google embraced a label system instead of censorship, users would truly see all relevant results ranked according to quality, and would see that most content of that nature has been flagged as unverified, disputed, or false.

Conclusion of Compromise Arguments

Our compromise proposal truly is what’s best for Google, Google users, victims, and America.

What We Are Doing

We are going to fight like hell because if we don't, you won't have the internet as you know it anymore. In furtherance of this effort we will be using this website to expose Google's vulnerabilities. This will be an information war for the most part. We intend to identify high value targets, find out all we can about them, and disseminate our findings to anyone who's been harmed by Google censorship. Our goal is to do all we can to make Google realize that censoring search is bad for Google, and to motivate them to start doing what is truly best for their company. Our goal is not to hurt Google in the long run, but to make them realize that being bad for Google users, America, and us is bad for Google. If Google agrees to our proposal, we believe the biggest winners will be Google shareholders. To those ends we are planning a two-phase attack to begin our campaign, using tactics proven to work on more powerful groups.

Attack Plan

Our attack plan currently has two phases: research and promotion. The research phase involves learning all we can and publishing our findings. The promotion phase involves disseminating those findings to as many people as possible.

Phase 1: Research

If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.

– Sun Tzu

This phase involves gathering all information we can find about Google assets in the United States. We will begin with in depth background checks on 52 high ranking Google employees, scrape social media for information about anyone that self identifies as a Google employee, attempt to integrate our site with a background check API capable of providing detailed information on individuals matching the names/locations harvested from social media, and researching Google facilities with an emphasis on identifying security vulnerabilities.

Phase 2: Promotion

As we've already established in our manifesto, we are far from the only people whose voices Google has tried to silence for ideological reasons. The promotion phase will involve making sure that those other voices know what we know.

Proven Track Record

Our tactics have already been proven to have a deterrent effect on organizations more powerful than Google. We fought a similar battle with the federal government a few years ago. In that case they attempted to silence us with selective prosecution and enforcement. Selective prosecution takes place when a prosecution is motivated in part by an impermissible factor, like silencing the speech of a defendant. Selective enforcement takes place when a law enforcement officer engages in similar conduct. The federal government thought it would be a good idea to pursue us in that way until we fought back. Eventually we acquired enough information, and demonstrated our ability to disseminate it to a wide enough audience, that they backed off. However, just as key to our success was our willingness to be reasonable.

What We Did

We compiled a list of every federal law enforcement and court official involved in our cases. We conducted in depth background checks that revealed things like photographs, addresses, phone numbers, email addresses, and in some cases prior criminal charges. We published all of it. They tried to shut us up using the court and even obtained an order at one point directing us to stop disseminating the information, but it didn’t work even when our leader was in jail. It didn’t work because we partnered with foreign organizations that don’t honor orders from American courts. Eventually they realized that the only way to contain the spread was to leave us alone.

How We Were Reasonable

We were reasonable because our only demand was that they respect our First Amendment rights. Our business was always perfectly legal, so they had no business looking for ways to shut it down. They certainly had no business pursuing criminal charges for minor violations of the law that would not otherwise have been pursued were it not for the business model they wished were illegal. Likewise, the pursuit of excessive sentences as a means of neutralizing us was equally unlawful. Eventually they stopped and so did we. Today we have a more amicable relationship, but we still don't trust the government, so instead of keeping that information online we keep it blocked, with the block on a timer so that if we are unable to tell the system that all is well after a few days, it will disseminate the information automatically.

What This Means for Google

Google has an opportunity to improve their bottom line by adopting our proposal. Unfortunately, they are not likely to adopt it right away. They will most likely try to ignore us until we learn too much for them to ignore. Then they will likely kick and scream like the government did, until they realize they have no other options. At that point they will look for a "legitimate" reason to reach a compromise with us on some level. We will make sure they have a face-saving explanation available.

UPDATE

Google has recently expanded their censorship efforts to include home addresses, phone numbers, and personal email addresses. In the past someone would have to show that a threat existed, but that is no longer required. We can't help but think that we inspired this, or at least failed to make Google realize the reality of their situation. Perhaps they viewed our lack of progress on the research front as a sign that it was safe to start censoring more, or maybe they realized their own policies didn't allow them to get links to their own addresses on this site removed from their own search results.

No matter the reason, this will not be tolerated. We will complete our research on the Top 52 Most Wanted by the end of the summer. Then we will promote it by buying sponsored ad space on Gab and other free speech friendly platforms. If Google doesn’t like the sound of this they have until the end of the summer to meet our prior demands and our new one which is that they not remove addresses, emails, and phone numbers from search for any reason whatsoever.
