Corporate censorship now in full force
  • Gab, the controversial social network with a far-right following, has pulled its website offline after domain provider GoDaddy gave it 24 hours to move to another service. The move comes as other companies including PayPal, Medium, Stripe, and Joyent blocked Gab over the weekend. It had emerged that Robert Bowers, who allegedly shot and killed eleven people at a Pittsburgh synagogue on Saturday, had a history of posting anti-Semitic messages on Gab.

    And it is only a small beginning; in 2019 these guys will start blocking sites and services by the thousands.

  • Twitter has suspended multiple large Cuban media accounts for reasons the social media platform has yet to explain as of this writing, a move which journalist Dan Cohen has described as “the equivalent of silencing CNN, Fox, WaPo and NPR’s accounts” for that nation. The Union of Cuban Journalists has denounced the move as censorship.

  • YouTube

    Because of this ongoing work, over the last 18 months we’ve reduced views on videos that are later removed for violating our policies by 80%, and we’re continuously working to reduce this number further.


    It is time to build some other video platform, without Google.

  • YouTube plans to tweak its recommendation algorithm to cut back on conspiracy theory videos in the UK, eight months after it conducted a similar experiment in the US. The platform is in the middle of rolling out the update to its British users, a spokesperson confirmed to TechCrunch.

    They tweak it every day. But quite soon the day will come when this algorithm, inserted all the way up to their guts, will explode. It will be quite a drama.

  • YouTube must leave up some videos that are “controversial or even offensive” in order to remain an open platform, said YouTube CEO Susan Wojcicki.

    Wojcicki outlined a new way that YouTube is framing its existing set of goals to keep the platform a positive, healthy space. She calls them the four “R”s: removing prohibited content quickly, raising up authoritative voices, reducing the spread of problematic content, and rewarding proper, aka "trusted", creators.

    Nicer and nicer.

  • This year, we've seen an unprecedented push to implement censorship across all online platforms, making it increasingly difficult to obtain and share crucial information about health topics. If you've been having difficulty finding articles from my website in your Google searches of late, you're not alone.

    Google traffic to my website has plummeted by about 99% over the past few weeks. The reason? Google's June 2019 broad core update, which took effect June 3, removed most pages from its search results.

    One of the primary sources Google's quality raters are instructed to use when assessing the expertise, authoritativeness and trustworthiness of an author or website is Wikipedia.

    To me it looks like a nice plan, so it will be getting worse fast.

  • Cloudflare, an online infrastructure service that helps websites mitigate DDoS attacks, will be terminating its service for 8chan following the deadly, white nationalist shooting in El Paso, Texas over the weekend.

    The owners of 8chan have already been notified that their services will be revoked, opening the site up for potential DDoS attacks that could shut it down entirely. Cloudflare will officially shut down service at midnight Pacific Time tonight.

    Slow, too slow.

    Time to move to thousands of sites per day :-)

  • Progress is fast, as I said:

    YouTube is changing its community guidelines to ban videos promoting the superiority of any group as a justification for discrimination against others based on their age, gender, race, caste, religion, sexual orientation, or veteran status.

    The move is expected to result in the removal of thousands of channels across YouTube.

    We will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.

    In addition to removing videos that violate our policies, we also want to reduce the spread of content that comes right up to the line.

    Thanks to this change, the number of views this type of content gets from recommendations has dropped by over 50% in the U.S. Our systems are also getting smarter about what types of videos should get this treatment, and we’ll be able to apply it to even more borderline videos moving forward.

    And a surprise:

    if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the "watch next" panel.


  • Crossfit, the high-intensity gym program, released a statement this past Thursday slamming Facebook for the unexplained removal of its content as well as the company's lack of self-responsibility as the "de facto authority over the public square."

    That group advocates low-carb, high-fat diets -- something that not everybody in the health community agrees upon.

    Wrong diet - ban! Nice progress.

  • Facebook going big

    Proactive Rate: Of the content we took action on, how much was detected by our systems before someone reported it to us. This metric typically reflects how effective AI is in a particular policy area.

    In six of the policy areas we include in this report, we proactively detected over 95% of the content we took action on before needing someone to report it. For hate speech, we now detect 65% of the content we remove, up from 24% just over a year ago when we first shared our efforts. In the first quarter of 2019, we took down 4 million hate speech posts and we continue to invest in technology to expand our abilities to detect this content across different languages and regions.
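
    The arithmetic behind that "proactive rate" metric is trivial; here is a quick sketch (the function name and the exact split of the 4 million posts are my own illustration — only the ~65% share comes from Facebook's report):

```python
# "Proactive rate": of all content actioned, the share flagged by
# automated systems before any user report. Numbers are illustrative.

def proactive_rate(actioned_total: int, detected_proactively: int) -> float:
    """Fraction of actioned content detected before a user report."""
    if actioned_total == 0:
        return 0.0
    return detected_proactively / actioned_total

# Hypothetical quarter: 4,000,000 posts actioned, 2,600,000 of them
# flagged by automation before anyone complained.
rate = proactive_rate(4_000_000, 2_600_000)
print(f"{rate:.0%}")  # → 65%
```

    So the higher this share, the fewer posts ever need a human complaint before removal.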


  • The ruling class is starting to get serious on YouTube

    In the first three months of 2019, Google manually reviewed more than a million suspected "terrorist videos" on YouTube, Reuters reports. Of those reviewed, it deemed 90,000 violated its terrorism policy.

    The company has more than 10,000 people working on content review and they spend hundreds of millions of dollars on this.

    Seems like they are starting to realize the dangers; hope it won't buy them much time.

  • Google has launched some new tools in a bid to fight misinformation about upcoming elections in Europe. A large part of that effort is focused on YouTube, where Google will launch publisher transparency labels in Europe, showing news sources which receive government or public funding. Those were unveiled in the US back in February, but had yet to arrive in the EU. "Our goal here is to equip you with more information to help you better understand the sources of news content that you choose to watch on YouTube," the company said.

    YouTube will highlight sources like BBC News or FranceInfo in the Top News or Breaking News shelves in more European nations, making it easier for users to find verified news. Those features are already available in the UK, France, Germany and other EU countries, but Google plans to bring them to other nations "in the coming weeks and months."

    Pigs must consume only proper content :-)

  • Australia pledged Saturday to introduce new laws that could see social media executives jailed and tech giants fined billions for failing to remove extremist material from their platforms.

    The tough new legislation will be brought to parliament next week as Canberra pushes for social media companies to prevent their platforms from being "weaponised" by terrorists in the wake of the Christchurch mosque attacks.

    "Big social media companies have a responsibility to take every possible action to ensure their technology products are not exploited by murderous terrorists," Prime Minister Scott Morrison said in a statement.

    Morrison, who met with a number of tech firms Tuesday—including Facebook, Twitter and Google—said Australia would encourage other G20 nations to hold social media firms to account.

    Attorney-General Christian Porter said the new laws would make it a criminal offence for platforms not to "expeditiously" take down "abhorrent violent material" like terror attacks, murder or rape.

    Executives could face up to three years in prison for failing to do so, he added, while social media platforms—whose annual revenues can stretch into the tens of billions—would face fines of up to ten percent of their annual turnover.

    And now they have made a situation where they can say "we are forced to do this" :-)

  • The Chairman of the House Committee on Homeland Security, Bennie Thompson, has sent letters to the CEOs of Facebook, Microsoft, Twitter and YouTube asking them to brief the committee on their responses to the video on March 27th. Thompson was concerned the footage was still "widely available" on the internet giants' platforms, and that they "must do better."

    The Chairman noted that the companies formally created an organization to fight online terrorism in June 2017 and touted its success in purging ISIS and al-Qaeda content.

    So they removed content from the organization they created, ouch.

    And do not worry, under "online terrorism" they mean... you and your needs, just stated clearly and openly.

  • Seems like the staged event had been used as the reason for major drills.

    Internet providers in New Zealand aren't relying solely on companies like Facebook and YouTube to get rid of the Christchurch mass shooter's video. Major ISPs in the country, including Vodafone, Spark and Vocus, are working together to block access at the DNS level to websites that don't quickly respond to video takedown requests. The move quickly cut off access to multiple sites, including 4chan, 8chan (where the shooter was a member), LiveLeak and file transfer site Mega.

    Yesterday, Facebook said it removed 1.5 million videos of the attack in the first 24 hours after the shooting.

    While YouTube did not say precisely how many videos it ultimately removed, the company faced a similar flood of videos after the shooting, and moderators worked through the night to take down tens of thousands of videos with the footage, chief product officer Neal Mohan told the Post.

    Not a random thing. These guys intentionally advertised the video via all the main media channels, and after that tested the new tech they have for preventing unnecessary info from spreading.
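
    The DNS-level blocking mentioned above is easy to picture: the ISP's resolver simply refuses to answer for blocklisted domains, so those sites "disappear" for its customers. A minimal sketch (the domain names and blocklist here are hypothetical; real ISP resolvers operate on raw DNS packets):

```python
# Sketch of DNS-level blocking: the resolver checks a blocklist before
# answering. Blocked or unknown names get no answer (like NXDOMAIN).

BLOCKLIST = {"blocked-example.net"}

RECORDS = {  # the resolver's normal answers
    "blocked-example.net": "203.0.113.10",
    "allowed-example.org": "198.51.100.7",
}

def resolve(name):
    """Return an IP for name, or None (as if the domain did not exist)."""
    key = name.lower().rstrip(".")
    if key in BLOCKLIST:
        return None  # pretend the domain does not exist
    return RECORDS.get(key)

print(resolve("allowed-example.org"))  # → 198.51.100.7
print(resolve("blocked-example.net"))  # → None
```

    Note this only stops users of the ISPs' default resolvers; pointing your machine at any third-party DNS server bypasses it.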

  • New Zealand authorities have reminded citizens that they face up to 10 years in prison for "knowingly" possessing a copy of the New Zealand mosque shooting video - and up to 14 years in prison for sharing it. Corporations (such as web hosts) face an additional $200,000 ($137,000 US) fine under the same law.

    And now we clearly know why it had been staged by the ruling class.

    They are slowly introducing prison for owning the wrong videos; such laws already existed, but this one applies to anyone, not just some bad guys.

    Next they will be slowly shifting the type of video covered toward where they want it to be - any anti-capitalist and anti-elite ones.

  • WhatsApp appears to be working on a new feature to help users identify whether an image they receive is legitimate or not.

    Read: these guys will recognize and store all info about the images you send.

    Drop WhatsApp fully and move to Telegram, a nice place to read PV.

  • Zerohedge links were declared prohibited on Facebook.


    "This was a mistake with our automation to detect spam and we worked to fix it yesterday. We use a combination of human review and automation to enforce our policies around spam and in this case, our automation incorrectly blocked this link. As soon as we identified the issue, we worked quickly to fix it."

    It was only a short test :-) For now.

  • First they came for the communists, and I did not speak out - because I was not a communist;
    Then they came for the socialists, and I did not speak out - because I was not a socialist;
    Then they came for the trade unionists, and I did not speak out - because I was not a trade unionist;
    Then they came for the Jews, and I did not speak out - because I was not a Jew;
    Then they came for me - and there was no one left to speak out for me.

    A famous quote, and even the order will be the same after they are done with all the "fake news" and "conspiracy theories".

    Even this year we will see tens of thousands of channels banned over their content.

  • Facebook is considering making it harder to find anti-vaccine content in its search results and excluding organizations pushing anti-vaccine messages from groups it recommends to users after a lawmaker suggested those kinds of steps, according to a person familiar with the company's possible response.


    "We have strict policies that govern what videos we allow ads to appear on, and videos that promote anti-vaccination content are a violation of those policies. We enforce these policies vigorously, and if we find a video that violates them, we immediately take action and remove ads," reads an emailed statement from YouTube to BuzzFeed.

    And conspiracies again:

    A spokesman for software company Grammarly said the company also took immediate action.

    "Upon learning of this, we immediately contacted YouTube to pull our ads from appearing not only on this channel but also to ensure related content that promulgates conspiracy theories is completely excluded," they said, adding "We have stringent exclusion filters in place with YouTube that we believed would exclude such channels. We’ve asked YouTube to ensure this does not happen again."

    Seems to be going faster and faster.

  • Seems like YouTube is preparing something big

    AT&T has pulled all advertising from YouTube while the streaming service deals with issues regarding predatory comments being left on videos of children.

    Disney, Nestlé, and Fortnite maker Epic Games have also pulled ads from YouTube this week.

    Did these guys open any of the videos that are in "Trending"?

    Predatory comments are really nothing compared to these piles of crap.

    And these videos are the ones shown with the top ads.

  • WhatsApp is trying a number of measures to fight its fake news problem. The messaging service has revealed in a white paper that it's deleting 2 million accounts per month. And in many cases, users don't need to complain. About 95 percent of the offenders are deleted after WhatsApp spots "abnormal" activity.

  • YouTube says it will stop recommending conspiracy videos.

    YouTube now won't suggest "borderline" videos that come close to violating community guidelines or those which "misinform users in a harmful way."

    Examples of the types of videos it will bury include 9/11 misinformation.

    And this, guys, is a crime even under capitalist laws.

    An algorithm will decide which videos won't appear in recommendations, rather than people (though humans will help train the AI).

    And this is done to avoid blame.

  • @Vitaliy_Kiselev Definitely - cord cutting will probably be considered an extremist act of defiance. I've almost forgotten that Alex Jones exists at this point; deplatforming seems to be a successful tactic.

  • Roku - After the InfoWars channel became available, we heard from concerned parties and have determined that the channel should be removed from our platform. Deletion from the channel store and platform has begun and will be completed shortly.

    Roku did not say that InfoWars had broken any of its rules, as most of the other platforms that have removed the channel have done. It did not clarify whether the “concerned parties” were users, or advertisers who didn’t want their brands displayed next to the InfoWars channel.

    See? Big progress. Now they just hear from someone; no need to reference any rules anymore.

  • @robertGL

    By that time, cutting the cord and getting a life will be added to the list of extremist activities. :-)