This is what the world of tech has been up to this week, week 36 – September 4, 2023 to September 10, 2023.
Joseph Mercola loses suit against YouTube
Joseph Mercola, a prominent anti-vaccine activist, lost his lawsuit against YouTube this week. Mercola had used the platform to share videos spreading misinformation about Covid-19 vaccines.
He had argued that the platform owed him $75,000 in damages for banning his channel and removing his videos in 2021, claiming that YouTube breached its user contract by denying him access to his content. The case was dismissed with prejudice, meaning Mercola cannot refile it.
According to US magistrate judge Laurel Beeler, YouTube had no obligation to host Mercola’s content; the platform deemed that his content violated its community guidelines and acted accordingly. Beeler also stated that YouTube is not a storage site for user content, so there was no breach of contract.
Mercola had gained 300,000 subscribers on his YouTube channel since he launched it in 2005. He became an influential spreader of Covid-19 misinformation, making the list of the Disinformation Dozen. His lawsuit also argued that YouTube did not give him enough notice to remove his content. Judge Beeler barred all of Mercola’s claims under Section 230 of the Communications Decency Act.
U.S. Attorneys General raise concern about AI-generated child sex abuse material (CSAM)
Attorneys General from every U.S. state and territory sent a letter to Congress, outlining their concerns about CSAM generated by artificial intelligence (AI).
The letter asks lawmakers to study how AI-generated CSAM may be used to exploit children. It also asks for an expansion of existing CSAM laws to cover AI-generated materials. The Attorneys General expressed “a deep and grave concern” about child protections in their respective states. They worry that AI creates a new “frontier of abuse” that poses serious difficulties for prosecution.
The alarm over AI-generated CSAM centers on open-source technologies like Stable Diffusion, which allow users to create AI pornography, along with tools and add-ons to improve the quality of the output. These models are often run locally, so there is little to prevent users from creating CSAM. By contrast, hosted platforms like DALL-E, Adobe Firefly, and Midjourney have filters that block pornographic content.
In the letter, the Attorneys General ask and answer a critical question: how can fake images cause any harm? They argue that the potential risk to children and their families is real enough. Further, even unrealistic CSAM images may help grow the child exploitation market: they can normalize child abuse and stoke abusers’ appetites.
The difficulty for lawmakers lies in restricting AI-generated CSAM without interfering with individual rights. In addition, Stable Diffusion models are widely used and cannot be banned outright.
The letter recommends forming an expert commission to investigate AI-generated CSAM on an ongoing basis. Its findings would then give prosecutors the right tools to uphold child protection laws.
In its call to action, the letter states that the issue of AI-generated CSAM is “a race against time” to protect children against the dangers of AI.
Google’s antitrust case begins September 12
The U.S. et al. v. Google case, filed in 2020, goes to trial on Tuesday, September 12, 2023. The pivotal case is expected to last 10 weeks and seeks to determine whether Google illegally gained its competitive advantage in search. The key question is: did Google pay companies like Apple to make its search engine the default setting on their devices?
So far, Google’s stance is that its deals with Apple and other device manufacturers were not exclusive, since users remain free to switch to alternative search engines like Bing and DuckDuckGo. Whatever the outcome of this landmark case, it will set a precedent for tech industry giants. This is the biggest antitrust case since U.S. v. Microsoft in 1998.
There will be no jury for this trial. Judge Amit P. Mehta will preside, and Kenneth Dintzer will represent the government. John E. Schmidtlein of Williams & Connolly will represent Google. Executives from Google and other tech companies are expected to take the stand.
European Commission opens investigations into Bing, Edge, iMessage, and Microsoft Advertising
Under the new Digital Markets Act (DMA), the EU is launching market investigations into the services listed above. The investigations will determine whether they qualify as core platform services, i.e., services providing essential features like search and instant messaging. Under the DMA, tech companies offering such services are designated gatekeepers and must follow EU regulations to operate in the region.
At the heart of the DMA is interoperability. If certain core services are essential enough and see significant usage in the EU, they must allow third-party access. For example, Bing Search would need to offer results from Google Search and other engines to its users.
This strict new law came into force in November 2022 and became applicable in May 2023. If the services under investigation meet the DMA criteria, the companies behind them will need to comply with EU regulations by March 2024.
So far, Apple and Microsoft have argued that iMessage and Bing, respectively, are not popular enough to fall under the DMA. However, both companies offer other core platform services that do meet the DMA threshold, including Windows, iOS, and the App Store. Failure to comply will attract fines of up to 10% of a company’s global annual turnover, with higher penalties for repeated offenses.
Major car brands fail at consumer privacy
A report by the Mozilla Foundation reveals that 25 car brands scored poorly on consumer privacy. These brands, including Tesla, Toyota, and Ford, collect personal data through apps in the car and on drivers’ devices.
This data helps manufacturers tailor their vehicles, but they can also share or sell it to third parties. The data collected includes deeply personal information such as drivers’ immigration status and sexual activity. Some of the brands’ privacy policies state outright that this data can be shared with law enforcement and data brokers.
The report is part of Mozilla’s Privacy Not Included (PNI) research. Launched in 2017, the PNI buyer’s guide advises consumers about privacy and security. It informs users about personal data that their connected devices collect and potentially sell.
In this recent report, PNI investigated car brands from five countries (the USA, Germany, France, Japan, and South Korea). Researchers read their privacy policies and tested their vehicle apps. The worst performer was Nissan, with Volkswagen, Toyota, Kia, and Mercedes-Benz also scoring poorly on privacy protection. Renault was the least problematic brand, as it must conform to the EU’s stringent GDPR regulations.