EU parliament backs tighter rules on behavioural ads – TechCrunch

The EU parliament has backed a call for tighter regulations on behavioral ads (aka microtargeting) in favor of less intrusive, contextual forms of advertising — urging Commission lawmakers to also assess further regulatory options, including looking at a phase-out leading to a full ban.

MEPs also want Internet users to be able to opt out of algorithmic content curation altogether.

The legislative initiative, introduced by the Legal Affairs committee, sets the parliament on a collision course with the business model of tech giants Facebook and Google.

Parliamentarians also backed a call for the Commission to look at options for setting up a European entity to monitor and impose fines to ensure compliance with rebooted digital rules — voicing support for a single, pan-EU Internet regulator to keep platforms in line.

The votes by the elected representatives of EU citizens are non-binding but send a clear signal to Commission lawmakers who are busy working on an update to existing ecommerce rules, via the forthcoming Digital Services Act (DSA) package — due to be introduced next month.

The DSA is intended to rework the regional rule book for digital services, including tackling controversial issues such as liability for user-generated content and online disinformation. And while only the Commission can propose laws, the DSA will need to gain the backing of the EU parliament (and the Council) if it is to go the legislative distance, so the executive needs to take note of MEPs’ views.

The Commission also intends to introduce a second package, aimed at regulating ‘gatekeeper’ platforms by applying ex ante rules — called the Digital Markets Act.

A spokesman for the Commission confirmed it intends to introduce both packages before the end of the year, adding: “The proposals will create a safer digital space for all users where their fundamental rights are protected as well as a level playing field to allow innovative digital businesses to grow within the Single Market and compete globally.”

Battle over adtech

The mass surveillance of Internet users for ad targeting — a space that’s dominated by Google and Facebook — looks set to be a major battleground as Commission lawmakers draw up the DSA package.

Last month Facebook’s policy VP Nick Clegg, a former MEP himself, urged regional lawmakers to look favorably on a business model he couched as “personalized advertising” — arguing that behavioral ad targeting allows small businesses to level the playing field with better resourced rivals.

However, the model remains under legal attack on multiple fronts in the EU.

Scores of complaints have been lodged with EU data protection agencies over the mass exploitation of Internet users’ data by the adtech industry since the General Data Protection Regulation (GDPR) began to apply — with complaints raising questions over the lawfulness of the processing and the standard of consent claimed.

Just last week, a preliminary report by Belgium’s data watchdog found that a flagship tool for gathering Internet users’ consent to ad tracking that’s operated by the IAB Europe fails to meet the required GDPR standard.

The use of Internet users’ personal data in the high velocity information exchange at the core of programmatic advertising’s real-time bidding (RTB) process is also being probed by Ireland’s DPC, following a series of complaints. The UK’s ICO has warned for well over a year of systemic problems with RTB too.

Meanwhile some of the oldest unresolved GDPR complaints pertain to so-called ‘forced consent’ by Facebook — given GDPR’s requirement that for consent to be lawful it must be freely given. Yet Facebook does not offer any opt-out from behavioral targeting; the ‘choice’ it offers is to use its service or not use it.

Google has also faced complaints over this issue. And last year France’s CNIL fined it $57M for not providing sufficiently clear info to Android users over how it was processing their data. But the key question of whether consent is required for ad targeting remains under investigation by Ireland’s DPC almost 2.5 years after the original GDPR complaint was filed — meaning the clock is ticking on a decision.

And still there’s more: Facebook’s processing of EU users’ personal data in the US also faces huge legal uncertainty because of the clash between fundamental EU privacy rights and US surveillance law.

A major ruling (aka Schrems II) by Europe’s top court this summer has made it clear EU data protection agencies have an obligation to step in and suspend transfers of personal data to third countries when there’s a risk the information is not adequately protected. This led to Ireland’s DPC sending Facebook a preliminary order to suspend EU data transfers.

Facebook has used the Irish courts to get a stay on that while it seeks a judicial review of the regulator’s process — but the overarching legal uncertainty remains. (Not least because the complainant, angry that data continues to flow, has also been granted a judicial review of the DPC’s handling of his original complaint.)

There has also been an uptick in EU class actions targeting privacy rights, as the GDPR provides a framework that litigation funders believe they can profit from.

All this legal activity focused on EU citizens’ privacy and data rights puts pressure on Commission lawmakers not to be seen to row back standards as they shape the DSA package — with the parliament now firing its own warning shot calling for tighter restrictions on intrusive adtech.

It’s not the first such call from MEPs, either. This summer the parliament urged the Commission to “ban platforms from displaying micro-targeted advertisements and to increase transparency for users”. And while they’ve now stepped away from calling for an immediate outright ban, yesterday’s votes were preceded by more detailed discussion — as parliamentarians sought to debate in earnest with the aim of influencing what ends up in the DSA package.

Ahead of the committee votes, the online ad standards body IAB Europe also sought to exert influence — putting out a statement urging EU lawmakers not to increase the regulatory load on online content and services.

“A facile and indiscriminate condemnation of ‘tracking’ ignores the fact that local, generalist press whose investigative reporting holds power to account in a democratic society, cannot be funded with contextual ads alone, since these publishers do not have the resources to invest in lifestyle and other features that lend themselves to contextual targeting,” it suggested.

“Instead of adding redundant or contradictory provisions to the current rules, IAB Europe urges EU policymakers and regulators to work with the industry and support existing legal compliance standards such as the IAB Europe Transparency & Consent Framework [TCF], that can even help regulators with enforcement. The DSA should rather tackle clear problems meriting attention in the online space,” it added in the statement last month.

However, as we reported last week, the IAB Europe’s TCF has been found not to comply with existing EU standards following an investigation by the Belgian DPA’s inspectorate service — suggesting the tool offers quite the opposite of ‘model’ GDPR compliance. (Although a final decision by the DPA is pending.)

The EU parliament’s Civil Liberties committee also put forward a non-legislative resolution yesterday, focused on fundamental rights — including support for privacy and data protection — that gained MEPs’ backing.

Its resolution asserted that microtargeting based on people’s vulnerabilities is problematic, as well as raising concerns over the tech’s role as a conduit for the spread of hate speech and disinformation.

The committee got backing for a call for greater transparency on the monetisation policies of online platforms.

‘Know your business customer’

Other measures MEPs supported in the series of votes yesterday included a call to set up a binding ‘notice-and-action’ mechanism so Internet users can notify online intermediaries about potentially illegal online content or activities — with the possibility of redress via a national dispute settlement body.

MEPs rejected the use of upload filters or any form of ex-ante content control for harmful or illegal content, saying the final decision on whether content is legal or not should be taken by an independent judiciary, not by private undertakings.

They also backed dealing with harmful content, hate speech and disinformation via enhanced transparency obligations on platforms and by helping citizens acquire media and digital literacy so they’re better able to navigate such content.

A push by the parliament’s Internal Market Committee for a ‘Know Your Business Customer’ principle to be introduced — to combat the sale of illegal and unsafe products online — also gained MEPs’ backing, with parliamentarians supporting measures to make platforms and marketplaces do a better job of detecting and taking down false claims and tackling rogue traders.

Parliamentarians also supported the introduction of specific rules to prevent (not merely remedy) market failures caused by dominant platform players as a means of opening up markets to new entrants — signalling support for the Commission’s plan to introduce ex ante rules for ‘gatekeeper’ platforms.

Liability for ‘high risk’ AI

The parliament also backed a legislative initiative recommending rules for AI — urging Commission lawmakers to present a new legal framework outlining the ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU, including for software, algorithms and data.

The Commission has made it clear it’s working on such a framework, setting out a white paper this year — with a full proposal expected in 2021.

MEPs backed a requirement that ‘high-risk’ AI technologies, such as those with self-learning capacities, be designed to allow for human oversight at any time — and called for a future-oriented civil liability framework that would make those operating such tech strictly liable for any resulting damage.

The parliament agreed such rules should apply to physical or virtual AI activity that harms or damages life, health, physical integrity, property, or causes significant immaterial harm if it results in “verifiable economic loss”.

This report was updated with comment from the Commission and additional detail about the Digital Markets Act.


Gowalla is being resurrected as an augmented reality social app – TechCrunch

Gowalla is coming back.

The startup, which longtime TechCrunch readers will likely recall, was an ambitious consumer social app that excited Silicon Valley investors but ultimately floundered in its quest to take on Foursquare before an eventual $3 million acqui-hire in 2011 brought the company’s talent to Facebook.

The story certainly seemed destined to end there, but founder Josh Williams tells TechCrunch that he has decided to revive the Gowalla name and build on its ultimate vision by leaning on augmented reality tech.

“I really don’t think [Gowalla’s vision] has been fully realized at all, which is why I still want to scratch this itch,” Williams tells TechCrunch. “It was frankly really difficult to see it shut down.”

After a stint at Facebook, another venture-backed startup and a few other gigs, Williams has reacquired the Gowalla name, and is resurrecting the company with the guidance of co-founder Patrick Piemonte, a former Apple interface designer who previously founded an AR startup called Mirage. The new company was incubated inside Form Capital, a small design-centric VC fund operated by Williams and Bobby Goodlatte.

Founders Patrick Piemonte (left) and Josh Williams (right). Image credit: Josh Williams.

Williams hopes that AR can bring the Gowalla brand new life.

Despite significant investment from Facebook, Apple and Google, augmented reality is still seen as a bit of a gamble, with many proponents estimating mass adoption to be several years out. Apple’s ARKit developer platform has yielded few wins despite hefty investment, and Pokémon GO — the space’s sole consumer smash hit — is growing old.

“The biggest AR experience out there is Pokémon GO, and it’s now over four years old,” Williams says. “It’s moved the space forward a lot but is still very early in terms of what we’re going to see.”

Williams was cryptic when it came to details for what exactly the new augmented reality platform would look like when it launches. He did specify that it will feel more like a gamified social app than a social game, though he also lists the Nintendo franchise Animal Crossing as one of the platform’s foundational inspirations.

A glimpse of the branding for the new Gowalla. Image credit: Josh Williams

“It’s not a game with bosses or missions or levels, but rather something that you can experience,” Williams says. “How do you blend augmented reality and location? How do you see the world through somebody else’s eyes?”

A location-based social platform will likely rely on users actually going places, and the pandemic has largely dictated the app’s launch timing. Today, Gowalla is launching a waitlist; Williams says the app itself will launch in beta “in a number of cities” sometime in the first half of next year. The team is also trying something unique with a smaller paid beta group called the “Street Team,” which will give users paying a flat $49 fee early access to Gowalla as well as “VIP membership,” membership to a private Discord group and some branded swag. A dedicated Street Team app will also launch in December.


Instagram rolls out fan badges for live videos, expands IGTV ads test – TechCrunch

Instagram is today introducing a new way for creators to make money. The company is now rolling out badges in Instagram Live to an initial group of over 50,000 creators, who will be able to offer their fans the ability to purchase badges during their live videos to stand out in the comments and show their support.

The idea to monetize using fan badges is not unique to Instagram. Other live streaming platforms, including Twitch and YouTube, have similar systems. Facebook Live also allows fans to purchase stars on live videos, as a virtual tipping mechanism.

Instagram users will see three options to purchase a badge during live videos: badges that cost $0.99, $1.99, or $4.99.

On Instagram Live, badges will not only call attention to the fans’ comments, they also unlock special features, Instagram says. This includes a placement on a creator’s list of badge holders and access to a special heart badge.

The badges and list make it easier for creators to quickly see which fans are supporting their efforts, and give them a shout-out, if desired.

Image Credits: Instagram

To kick off the roll out of badges, Instagram says it will also temporarily match creator earnings from badge purchases during live videos, starting in November. Creators @ronnebrown and @youngezee are among those who are testing badges.

The company says it’s not taking a revenue share at launch, but as it expands its test of badges it will explore revenue share in the future.

“Creators push culture forward. Many of them dedicate their life to this, and it’s so important to us that they have easy ways to make money from their content,” said Instagram COO Justin Osofsky, in a statement. “These are additional steps in our work to make Instagram the single best place for creators to tell their story, grow their audience, and make a living,” he added.

Additionally, Instagram today is expanding access to its IGTV ads test to more creators. This program, introduced this spring, allows creators to earn money by including ads alongside their videos. Today, creators keep at least 55% of that revenue, Instagram says.

The introduction of badges and IGTV ads was previously announced, with Instagram saying earlier this year that it would test the former with a small group of creators.

The changes follow what’s been a period of rapid growth on Instagram’s live video platform, as creators and fans sheltered at home during the coronavirus pandemic, which led to the cancellation of live events, large meetups, concerts, and more.

During the pandemic’s start, for example, Instagram said Live creators saw a 70% increase in video views from February to March 2020. In Q2, Facebook also reported monthly active user growth (from 2.99B in Q1 to 3.14B) that it said reflected increased engagement from consumers who were spending more time at home.




Pakistan un-bans TikTok – TechCrunch

TikTok returns to Pakistan, Apple launches a music-focused streaming station and SpaceX launches more Starlink satellites. This is your Daily Crunch for October 19, 2020.

The big story: Pakistan un-bans TikTok

The Pakistan Telecommunication Authority blocked the video app 11 days ago, over what it described as “immoral,” “obscene” and “vulgar” videos. The authority said today that it’s lifting the ban after negotiating with TikTok management.

“The restoration of TikTok is strictly subject to the condition that the platform will not be used for the spread of vulgarity/indecent content & societal values will not be abused,” it continued.

This isn’t the first time this year the country tried to crack down on digital content. Pakistan announced new internet censorship rules this year, but rescinded them after Facebook, Google and Twitter threatened to leave the country.

The tech giants

Apple launches a US-only music video station, Apple Music TV — The new music video station offers a free, 24-hour live stream of popular music videos and other music content.

Google Cloud launches Lending DocAI, its first dedicated mortgage industry tool — The tool is meant to help mortgage companies speed up the process of evaluating a borrower’s income and asset documents.

Facebook introduces a new Messenger API with support for Instagram — The update means businesses will be able to integrate Instagram messaging into the applications and workflows they’re already using in-house to manage their Facebook conversations.

Startups, funding and venture capital

SpaceX successfully launches 60 more Starlink satellites, bringing total delivered to orbit to more than 800 — That makes 835 Starlink satellites launched thus far, though not all of those are operational.

Singapore tech-based real estate agency Propseller raises $1.2M seed round — Propseller combines a tech platform with in-house agents to close transactions more quickly.

Ready Set Raise, an accelerator for women built by women, announces third class — Ready Set Raise has changed its programming to be more focused on a “realistic fundraising process” vetted by hundreds of women.

Advice and analysis for Extra Crunch

Are VCs cutting checks in the closing days of the 2020 election? — Several investors told TechCrunch they were split about how they’re making these decisions.

Disney+ UX teardown: Wins, fails and fixes — With the help of Built for Mars founder and UX expert Peter Ramsey, we highlight some of the things Disney+ gets right and things that should be fixed.

Late-stage deals made Q3 2020 a standout VC quarter for US-based startups — Investors backed a record 88 megarounds of $100 million or more.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

US charges Russian hackers blamed for Ukraine power outages and the NotPetya ransomware attack — Prosecutors said the group of hackers, who work for the Russian GRU, are behind the “most disruptive and destructive series of computer attacks ever attributed to a single group.”

Stitcher’s podcasts arrive on Pandora with acquisition’s completion — SiriusXM today completed its previously announced $325 million acquisition of podcast platform Stitcher from E.W. Scripps, and has now launched Stitcher’s podcasts on Pandora.

Original Content podcast: It’s hard to resist the silliness of ‘Emily in Paris’ — The show’s Paris is a fantasy, but it’s a fantasy that we’re happy to visit.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.


Who regulates social media? – TechCrunch

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority who says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means “unregulated,” in that no legal business in this country is unregulated. For instance, Facebook, certainly a social media company, received a record $5 billion fine last year for failure to comply with rules set by the FTC. But not because the company violated its social media regulations — there aren’t any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them; not so social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)

Social media can roughly be defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators

Image Credits: Andrew Harrer/Bloomberg

The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that when violated result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis while a complaint is being formalized there. Equifax’s historic breach and minimal consequences are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at a national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments like this one, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress

Image: Bryce Durbin/TechCrunch

What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know that it will take forever and there’s more opportunity to get their finger in the pie before it’s baked. National rules, in addition to coming far too late, are much more likely also to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something that is plainly evident by the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation or GDPR, a set of rules, or rather augmentation of existing rules dating to 1995, that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.

Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s Data Protection Agency acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulations is really too complex of a topic to address here in the detail it deserves, and also reaches beyond the question of who exactly regulates social media. Europe’s role in that question of, if you will, speaking slowly and carrying a big stick promises to produce results on a grand scale, but for the purposes of this article it cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.

Capitol building. Image Credit: Bryce Durbin/TechCrunch

What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped is well beyond the scope of this article.)

Even the likes of the FAA lag behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders, but are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies involving, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority these companies and their activities — the scope of which we have only the faintest clue to — will remain in a blissful limbo, picking and choosing by which rules to abide and against which to fulminate and lobby. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe — perhaps it’s time we asked them to trust us for once.


Instagram’s handling of kids’ data is now being probed in the EU – TechCrunch

Facebook’s lead data regulator in Europe has opened another two probes into its business empire — both focused on how the Instagram platform processes children’s information.

The action by Ireland’s Data Protection Commission (DPC), reported earlier by the Telegraph, comes more than a year after a US data scientist reported concerns to Instagram that its platform was leaking the contact information of minors. David Stier went on to publish details of his investigation last year — saying Instagram had failed to make changes to prevent minors’ data being accessible.

He found that children who changed their Instagram account settings to a business account had their contact info (such as an email address and phone number) displayed unmasked via the platform — arguing that “millions” of children had had their contact information exposed as a result of how Instagram functions.

Facebook disputes Stier’s characterization of the issue — saying it’s always made it clear that contact info is displayed if people choose to switch to a business account on Instagram.

It also does now let people opt out of having their contact info displayed if they switch to a business account.

Nonetheless, its lead EU regulator has now said it’s identified “potential concerns” relating to how Instagram processes children’s data.

Per the Telegraph’s report, the regulator opened the dual inquiries late last month in response to claims the platform had put children at risk of grooming or hacking by revealing their contact details.

The Irish DPC did not say that, but did confirm two new statutory inquiries into Facebook’s processing of children’s data on the fully owned Instagram platform in a statement emailed to TechCrunch, in which it notes the photo-sharing platform “is used widely by children in Ireland and across Europe”.

“The DPC has been actively monitoring complaints received from individuals in this area and has identified potential concerns in relation to the processing of children’s personal data on Instagram which require further examination,” it writes.

The regulator’s statement specifies that the first inquiry will examine the legal basis Facebook claims for processing children’s data on the Instagram platform, and also whether or not there are adequate safeguards in place.

Europe’s General Data Protection Regulation (GDPR) includes specific provisions related to the processing of children’s information — with a hard cap set at age 13 for kids to be able to consent to their data being processed. The regulation also creates an expectation of baked in safeguards for kids’ data.

“The DPC will set out to establish whether Facebook has a legal basis for the ongoing processing of children’s personal data and if it employs adequate protections and or restrictions on the Instagram platform for such children,” it says of the first inquiry, adding: “This Inquiry will also consider whether Facebook meets its obligations as a data controller with regard to transparency requirements in its provision of Instagram to children.”

The DPC says the second inquiry will focus on the Instagram profile and account settings — looking at “the appropriateness of these settings for children”.

“Amongst other matters, this Inquiry will explore Facebook’s adherence with the requirements in the GDPR in respect to Data Protection by Design and Default and specifically in relation to Facebook’s responsibility to protect the data protection rights of children as vulnerable persons,” it adds.

In a statement responding to the regulator’s action, a Facebook company spokesperson told us:

We’ve always been clear that when people choose to set up a business account on Instagram, the contact information they shared would be publicly displayed. That’s very different to exposing people’s information. We’ve also made several updates to business accounts since the time of Mr. Stier’s mischaracterisation in 2019, and people can now opt out of including their contact information entirely. We’re in close contact with the IDPC and we’re cooperating with their inquiries.

Breaches of the GDPR can attract sanctions of as much as 4% of the global annual turnover of a data controller — which, in the case of Facebook, means any future fine for violating the regulation could run to multi-billions of euros.
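(For a rough sense of scale: Facebook reported revenue of roughly $70 billion for 2019, and 4% of that works out to around $2.8 billion, which is how a worst-case penalty gets into the billions.)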

That said, Ireland’s regulator now has around 25 open investigations related to multinational tech companies (aka cross-border GDPR cases) — a backlog that continues to attract criticism over the plodding progress of decisions. Which means the Instagram inquiries are joining the back of a very long queue.

Earlier this summer the DPC submitted its first draft decision on a cross-border GDPR case — related to a 2018 Twitter breach — sending it on to the other EU DPAs for review.

That step has led to a further delay, as the other EU regulators did not unanimously back the DPC’s decision — triggering a dispute mechanism set out in the GDPR.

In separate news, an investigation of Instagram influencers by the UK’s Competition and Markets Authority found the platform is failing to protect consumers from being misled. The BBC reports that the platform will roll out new tools over the next year including a prompt for influencers to confirm whether they have received incentives to promote a product or service before they are able to publish a post, and new algorithms built to spot potential advertising content.


Extend banks $40M to bring a new approach to the old game of extended warranties – TechCrunch

Extended warranties — those offers to add an extra year or two to an existing product warranty to give you a little more peace of mind in case something goes wrong with something you’ve purchased — have long been a part of the sales process when you’re buying big-ticket items from large stores. Today, a startup is announcing a large round of funding to help democratise the concept, using APIs to make extended warranties into something that even the smallest retailers can offer on the least expensive items.

Extend, which works with companies like Peloton, iRobot, ​Harman / JBL, Advance Auto Parts, Traeger Grills, BlendJet, SoClean, 1More, August Home, Balsam Hill, NewAir, Evolve Skateboards and some 150 others to build and handle extended warranties on their products, has raised $40 million in a Series B round of funding.

Woody Levin, the CEO and founder of the company, said in an interview that his ambition is to remove the roadblocks for smaller merchants (especially in the direct-to-consumer space) in offering extended warranties on their products, and, for consumers, to remove the stigma attached to the concept, which some see as simply preying on people’s insecurities and not worth the paper they are written on (or, these days, the splash screen they appear on in the checkout flow of your online transaction).

“There has been a stigma around the extended warranty for way too long,” he said. “We want to create a more elegant experience. We want Extend to be AppleCare for everything.”

Levin — a repeat founder who has sold multiple other startups — added that today, the company touches more than $27 billion in warrantable gross merchandise value. 

If Apple is where it’s aiming, Extend is working with the right VCs, drawn from the top shelf of investors. The round is being led by Meritech Capital — which has backed the likes of Facebook, Salesforce and Tableau, among others — with participation from PayPal Ventures and previous backers Great Point Ventures and Shah Capital Partners.

PayPal is a key investor here: it’s one of the most ubiquitous providers of online payments and other services for merchants, and will forever be on the lookout for those building technology that will lead to more conversions, especially in the current market, where social distancing has led to a boom in e-commerce, which in turn has led to a much more competitive landscape: more places to buy things, more discounts and more tech to keep people from navigating away and buying elsewhere.

“Merchants of all sizes can benefit from extended warranties but implementing and maintaining them has been too complex for many businesses,” said ​Jay Ganatra, Partner at PayPal Ventures​, in a statement. “Extend shares PayPal’s commitment to providing merchants with easy to use tools that help them better connect with and support their customers. On top of that, the Extend team has seriously improved the end-user experience through its use of their conversational chatbot. We’re excited to invest in Extend as it continues to redefine this space.”

Extend has raised $56 million to date, and it’s not disclosing valuation.

Extended warranties may pivot on the idea of providing more peace of mind to buyers who will, for example, take out Apple Care in order to make sure that their expensive iPhones don’t cost a fortune to fix or end up getting thrown away after a misadventure. But from the point of view of the merchant, they serve as a huge fillip to getting a sale over the line.

And over the last few years, as merchants have realised this, they’ve been applying the extended warranty to a lot more than just the most expensive things.

“The top 1% of merchants have been benefitting from that peace of mind,” Levin said. “If you look at Amazon, they are offering extended warranties on $40 backpacks. That’s because the purchase rate goes up when there is an extended warranty even being advertised.”

But advertising is not all: extended warranties generated a whopping $130 billion in 2019, with the figures growing at a rate of about 7.4% annually. 

The roadblock for many up to now has been that the companies that provide extended warranties are old and tend only to work with the biggest sellers. Companies like SquareTrade and Asurion, Levin said, work with only the “top 1% of companies,” ignoring smaller companies. “These are legacy companies focused on one-off integrations,” he said.

Extend’s solution has been to take the modern approach: it has built an API that can be integrated into any e-commerce storefront or check-out flow (it also works with others, like Affirm, that are trying to disrupt and modernise this process). The actual warranties can cost as little as $19.99, or far more than that, for one- or two-year plans.
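Extend’s actual interface isn’t documented in this article, but as a rough sketch of what a checkout integration like this usually amounts to (the endpoint paths, parameters and field names below are illustrative assumptions, not Extend’s real API), the merchant’s storefront fetches the warranty plans available for a product and then attaches the shopper’s chosen plan to the order:

```typescript
// Illustrative sketch only: endpoint paths, field names and response shapes
// below are assumptions for the sake of example, not Extend's real API.

interface WarrantyPlan {
  planId: string;
  termMonths: number; // e.g. 12 or 24
  priceCents: number; // e.g. 1999 for a $19.99 plan
}

interface OfferResponse {
  productId: string;
  plans: WarrantyPlan[];
}

// Ask the warranty service which plans can be offered for a product in the cart.
async function getWarrantyOffer(
  apiBase: string,
  apiKey: string,
  productId: string
): Promise<OfferResponse> {
  const res = await fetch(
    `${apiBase}/offers?productId=${encodeURIComponent(productId)}`,
    { headers: { Authorization: `Bearer ${apiKey}` } }
  );
  if (!res.ok) throw new Error(`Offer lookup failed: ${res.status}`);
  return (await res.json()) as OfferResponse;
}

// Attach the plan the shopper picked to the order, so the warranty contract is
// tied to the transaction and later claims don't depend on a paper receipt.
async function addPlanToOrder(
  apiBase: string,
  apiKey: string,
  orderId: string,
  planId: string
): Promise<void> {
  const res = await fetch(`${apiBase}/orders/${orderId}/warranty`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ planId }),
  });
  if (!res.ok) throw new Error(`Could not attach plan: ${res.status}`);
}
```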

Extend then partners with underwriters like AIG to provide the insurance backing to the warranties. Buyers can file claims by speaking with an agent, but Extend also offers an automated chatbot 24 hours a day to deal with claims. Levin said that some 98% of claims are processed through her. (She is called Kayley.)

Those claims, in turn, are processed with as little hassle as possible: since they are tied to transactions that happened online, buyers don’t need to have kept receipts in order to claim since Extend tracks that for them. It typically issues credits back for a buyer to either re-purchase the same product, or something else of equivalent value, at the same point of sale.

“Meritech always strives to work with companies that are seeking to define the future of an industry, and that’s exactly what Extend is doing with its platform,” said ​Alex Clayton, General Partner at Meritech Capital, in a statement. ​“Extend is filling a huge gap in the eCommerce infrastructure market by streamlining the process for merchants to offer extended warranties and protection plans on their products, and providing a seamless customer experience for consumers from start to finish. We’re eager to see what’s next.”

Based on the size of this round, and the companies backing it, there is a lot of momentum behind Extend, but it is not the only one to have identified this opportunity. Earlier this year Clyde raised $14 million for its take on extended warranty plans, and True-backed Upsie is also building a platform to offer warranties directly to consumers, bypassing the retailers altogether.


YouTube bans videos promoting conspiracy theories like QAnon that target individuals – TechCrunch

YouTube today joined social media platforms like Facebook and Twitter in taking more direct action to prohibit the distribution of conspiracy theories like QAnon.

The company announced that it is expanding its hate and harassment policies to ban videos “that [target] an individual or group with conspiracy theories that have been used to justify real-world violence,” according to a statement.

YouTube specifically pointed to videos that harass or threaten someone by claiming they are complicit in the false conspiracy theories promulgated by adherents to QAnon.

YouTube isn’t going as far as either of the other major social media outlets in establishing an outright ban on videos or articles that promote the outlandish conspiracies, instead focusing on the material that targets individuals.

“As always, context matters, so news coverage on these issues or content discussing them without targeting individuals or protected groups may stay up,” the company said in a statement. “We will begin enforcing this updated policy today, and will ramp up in the weeks to come.”

It’s the latest step in social media platforms’ efforts to combat the spread of disinformation and conspiracy theories that are increasingly linked to violence and terrorism in the real world.

In 2019, the FBI for the first time identified fringe conspiracy theories like QAnon as a domestic terrorist threat. Adherents to QAnon falsely claim that famous celebrities and Democratic politicians are part of a secret, Satanic, child-molesting cabal plotting to undermine Donald Trump.

In July, Twitter banned 7,000 accounts associated with the conspiracy theory, and last week Facebook announced a ban on the distribution of QAnon related materials or propaganda across its platforms.

These actions by the social media platforms may be too little, too late, considering how widely the conspiracy theories have spread… and the damage they’ve already done thanks to incidents like the attack on a pizza parlor in Washington, DC that landed the gunman in prison.

The recent steps at YouTube followed earlier efforts to stem the distribution of conspiracy theories by making changes to its recommendation algorithm to avoid promoting conspiracy-related materials.

However, as TechCrunch noted previously, it was over the course of 2018 and the past year that QAnon conspiracies really took root, and the theory is now a shockingly mainstream political belief system that has its own Congressional candidates.

So much for YouTube’s vaunted 70% drop in views coming from the company’s search and discovery systems. The company said that when it looked at QAnon content, it saw the number of views coming from non-subscribed recommendations dropping by more than 80% since January 2019.

YouTube noted that it may take additional steps going forward as it looks to combat conspiracy theories that lead to real-world violence.

“Due to the evolving nature and shifting tactics of groups promoting these conspiracy theories, we’ll continue to adapt our policies to stay current and remain committed to taking the steps needed to live up to this responsibility,” the company said.


We need universal digital ad transparency now – TechCrunch

15 researchers propose a new standard for advertising disclosures

Dear Mr. Zuckerberg, Mr. Dorsey, Mr. Pichai and Mr. Spiegel: We need universal digital ad transparency now!

The negative social impacts of discriminatory ad targeting and delivery are well-known, as are the social costs of disinformation and exploitative ad content. The prevalence of these harms has been demonstrated repeatedly by our research. At the same time, the vast majority of digital advertisers are responsible actors who are only seeking to connect with their customers and grow their businesses.

Many advertising platforms acknowledge the seriousness of the problems with digital ads, but they have taken different approaches to confronting those problems. While we believe that platforms need to continue to strengthen their vetting procedures for advertisers and ads, it is clear that this is not a problem advertising platforms can solve by themselves, as they themselves acknowledge. The vetting being done by the platforms alone is not working; public transparency of all ads, including ad spend and targeting information, is needed so that advertisers can be held accountable when they mislead or manipulate users.

Our research has shown:

  • Advertising platform system design allows advertisers to discriminate against users based on their gender, race and other sensitive attributes.
  • Platform ad delivery optimization can be discriminatory, regardless of whether advertisers attempt to set inclusive ad audience preferences.
  • Ad delivery algorithms may be causing polarization and making it difficult for political campaigns to reach voters with diverse political views.
  • Sponsors spent more than $1.3 billion on digital political ads, yet disclosure is vastly inadequate. Current voluntary archives do not prevent intentional or accidental deception of users.

While it doesn’t take the place of strong policies and rigorous enforcement, we believe transparency of ad content, targeting and delivery can effectively mitigate many of the potential harms of digital ads. Many of the largest advertising platforms agree; Facebook, Google, Twitter and Snapchat all have some form of an ad archive. The problem is that many of these archives are incomplete, poorly implemented, hard to access by researchers and have very different formats and modes of access. We propose a new standard for universal ad disclosure that should be met by every platform that publishes digital ads. If all platforms commit to the universal ad transparency standard we propose, it will mean a level playing field for platforms and advertisers, data for researchers and a safer internet for everyone.

The public deserves full transparency of all digital advertising. We want to acknowledge that what we propose will be a major undertaking for platforms and advertisers. However, we believe that the social harms currently being borne by users everywhere vastly outweigh the burden universal ad transparency would place on ad platforms and advertisers. Users deserve real transparency about all ads they are bombarded with every day. We have created a detailed description of what data should be made transparent that you can find here.
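Neither the letter nor this article reproduces that specification, but purely as an illustration of the kind of record universal disclosure implies, a single disclosed ad might bundle content, spend, targeting and delivery data along the following lines (the field names here are assumptions for the sake of example, not the signatories’ actual schema):

```typescript
// Hypothetical shape of a single disclosed ad record. The field names are
// illustrative assumptions, not the schema the signatories actually propose.

interface DisclosedAd {
  platform: string;                   // e.g. "facebook", "youtube"
  adId: string;                       // platform-assigned identifier
  advertiser: {
    name: string;
    payingEntity?: string;            // who actually paid, if different
  };
  content: {
    text?: string;
    mediaUrls: string[];
    landingPageUrl: string;
  };
  spend: {
    currency: string;
    amountLow: number;                // platforms often report ranges, not exact figures
    amountHigh: number;
  };
  targeting: {
    criteria: Record<string, string[]>; // e.g. { age: ["18-34"], interests: ["fitness"] }
    customAudienceUsed: boolean;
  };
  delivery: {
    firstShown: string;               // ISO 8601 timestamps
    lastShown: string;
    impressionsLow: number;
    impressionsHigh: number;
    demographicBreakdown: Record<string, number>; // share of impressions per group
  };
}
```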

We researchers stand ready to do our part. The time for universal ad transparency is now.

Signed by:

Jason Chuang, Mozilla
Kate Dommett, University of Sheffield
Laura Edelson, New York University
Erika Franklin Fowler, Wesleyan University
Michael Franz, Bowdoin College
Archon Fung, Harvard University
Sheila Krumholz, Center for Responsive Politics
Ben Lyons, University of Utah
Gregory Martin, Stanford University
Brendan Nyhan, Dartmouth College
Nate Persily, Stanford University
Travis Ridout, Washington State University
Kathleen Searles, Louisiana State University
Rebekah Tromble, George Washington University
Abby Wood, University of Southern California


Brazil’s Black Silicon Valley could be an epicenter of innovation in Latin America – TechCrunch

Over the last five years, Brazil has witnessed a startup boom.

The main startup hubs in the country have traditionally been São Paulo and Belo Horizonte, but now a new wave of cities are building their own thriving local startup ecosystems, including Recife with the Porto Digital hub and Florianópolis with Acate. More recently, a “Black Silicon Valley” is beginning to take shape in Salvador da Bahia.

While finance and media are typically concentrated in São Paulo and Rio de Janeiro, Salvador, a city of three million in the state of Bahia, is considered one of Brazil’s cultural capitals.

With an 84% Afro-Brazilian population, there are deep, rich and visible roots of Africa in the city’s history, music, cuisine and culture. The state of Bahia is almost the size of France and has 15 million people. Bahia’s creative legacy is quite clear, given that almost all the big Brazilian cultural patrimonies have their roots here, from samba and capoeira to various regional delicacies.

Many people are unaware that Brazil has the largest Black population in any country outside of Africa. Like counterparts in the U.S. and across the Americas, Afro-Brazilians have long struggled for socio-economic equity. As with counterparts in the United States, Brazil’s Black founders have less access to capital.

According to research by professor Marcelo Paixão for the Inter-American Development Bank, Afro-Brazilians are three times more likely to have their credit denied than their white counterparts. Afro-Brazilians also have over twice the poverty rate of white Brazilians, and only a handful of Afro-Brazilians have held legislative positions, despite comprising more than 50% of the population. Not to mention, they make up less than 5% of the top level of the top 500 companies. Compared with countries like the United States or the United Kingdom, the racial funding gap is even more stark, as more than 50% of Brazil’s population is classified as Afro-Brazilian.

Bahia could be an epicenter of innovation in Latin America

Salvador (Bahia’s capital) is the natural birthplace of Brazil’s Black Silicon Valley, which largely centers around a local ecosystem hub, Vale do Dendê.

Vale do Dendê coordinates with local startups, investors and government agencies to support entrepreneurship and innovation and runs startup acceleration programs specifically focusing on supporting Afro-Brazilian founders. The Vale do Dendê Accelerator organization has already been in the spotlight at international and national publications because of its innovative work in bringing startup and tech education from mainstream to traditionally underserved communities.

In almost three years, the accelerator has directly supported 90 companies that cut across various industries, with high representation from the creative and social impact sectors. Almost all of the companies have achieved double-digit growth, and various companies have gone on to raise further funding or corporate backing. One of the first portfolio companies, TrazFavela, a delivery app that focuses on linking customers and goods from traditionally marginalized communities, was supported by the accelerator in 2019. Despite the lockdown, the business grew 230% between March and May after incubation and recently signed an agreement for further support and investment from Google Brasil.

There is a clear recognition of the business case for Afro-Brazilian businesses. Another company supported in the beginning with mentoring by Vale do Dendê is Diaspora Black (which focuses on Black culture in the tourism sectors). It attracted backing from Facebook Brasil and grew 770% in 2020.

The same is true for AfroSaúde, a health tech company focused on low-income communities with a new service to prevent COVID-19 in favelas (urban slums, which incidentally have high Black representation). The app now has more than 1,000 Black health professionals on its platform, creating jobs while addressing a health crisis that had been tremendously racialized.

We’re at the brink of a renaissance here in Bahia

Despite Brazil’s challenging economic situation, large national and global companies and investors are taking notice of this startup boom. Major IT company Qintess has come on board as a major sponsor to help Salvador become the leading Black tech hub in Latin America.

The company announced an investment of around 10 million reais (nearly $2 million USD) over the next five years in Black startups, including a collaboration with Vale do Dendê to train around 2,000 people in tech and accelerate more than 500 startups led by Black founders. Also, in September, Google launched a 5 million reais (around $1 million USD) Black Founders Fund with the support of Vale do Dendê to boost the Afro-Brazilian startup ecosystem.

There is no doubt that the new wave of innovation will come from the emerging markets, and the African Diaspora can play an important role. With the world’s largest African diaspora population in the hemisphere, Brazil can be a major leader on this. Vale do Dendê is keen to build partnerships to make Brazil and Latin America a more representative startup and creative economy ecosystem.
