Facebook prototypes tabbed News Feed with Most Recent & Seen – TechCrunch

Facebook may make it easier to escape its ranking algorithm and explore the News Feed in different formats. Facebook has internally prototyped a tabbed version of the News Feed for mobile that includes the standard Most Relevant feed; the existing Most Recent feed of reverse-chronological posts, which was previously buried as a sidebar bookmark; and an Already Seen feed of posts you’ve previously viewed, which historically was only available on desktop via the largely unknown URL facebook.com/seen.

The tabbed feed is currently unlaunched, but if Facebook officially rolls it out, it could make the social network feel more dynamic and alive, since it’d be easier to access Most Recent to view what’s happening in real time. It could also help users track down an important post they lost that they might want to learn from or comment on. The tabbed interface would be the biggest change to News Feed since 2013, when Facebook announced but later scrapped the launch of a multi-feed with sidebar options for exploring just Music, Photos, Close Friends, and more.

The tabbed News Feed prototype was spotted in the Facebook for Android code by master reverse-engineering specialist Jane Manchun Wong, who has provided TechCrunch with tips on new features in the past. She was able to generate these screenshots that show the tabs for Relevant, Recent, and Seen above the News Feed. Tapping these reveals a Sort Your News Feed configuration window where you can choose between the feeds, see descriptions of them, or dive into the existing News Feed preferences about who you block or see first.

CEO Mark Zuckerberg reveals the later-scrapped multi-feed

When asked by TechCrunch, a Facebook spokesperson confirmed this is something it’s considering testing externally, but it’s only available internally for now. It’s exploring whether the tabbed interface would make Most Recent and Seen easier to access. “You can already view your Facebook News Feed chronologically. We’re testing ways to make it easier to find, as well as sort by posts you’ve already seen,” the spokesperson tells TechCrunch; the company also tweeted the statement.

Offering quicker ways to sort the feed could keep users scrolling longer. If they encounter a few boring posts chosen by the algorithm, want to see what friends are doing right now, or want to enjoy posts they already interacted with, a tabbed interface would give them an instant alternative beyond closing the app. While likely not the motive of this experiment, increasing time spent across these feeds could boost Facebook’s ad views at a time when it’s been hammered by Wall Street for slowing profit growth.

To many, Facebook’s algorithm can feel like an inscrutable black box that decides their content destiny. Feed it the wrong signals with pity Likes or guilty-pleasure video views and it can get confused about what you want. Facebook may finally deem us mature enough to have readily available controls over what we see.




Facebook asks for a moat of regulations it already meets – TechCrunch

It’s suspiciously convenient that Facebook already fulfills most of the regulatory requirements it’s asking governments to lay on the rest of the tech industry. Facebook CEO Mark Zuckerberg is in Brussels lobbying the European Union’s regulators as they form new laws to govern artificial intelligence, content moderation and more. But if they follow Facebook’s suggestions, they might reinforce the social network’s power rather than keep it in check by hamstringing companies with fewer resources.

We already saw this happen with GDPR. The idea was to strengthen privacy and weaken exploitative data collection that tech giants like Facebook and Google depend on for their business models. The result was that Facebook and Google actually gained or only slightly lost EU market share while all other adtech vendors got wrecked by the regulation, according to WhoTracksMe.

GDPR went into effect in May 2018, hurting other adtech vendors’ EU market share much worse than Google and Facebook. Image credit: WhoTracksMe

Tech giants like Facebook have the profits, lawyers, lobbyists, engineers, designers, scale and steady cash flow to navigate regulatory changes. Unless new laws are squarely targeted at the abuses or dominance of these large companies, their collateral damage can loom large. Rather than spend time and money they don’t have in order to comply, some smaller competitors will fold, scale back or sell out.

But at least in the case of GDPR, everyone had to add new transparency and opt-out features. If Facebook’s slate of requests goes through, it will sail forward largely unperturbed while rivals and upstarts scramble to get up to speed. I made this argument in March 2018 in my post “Regulation could protect Facebook, not punish it.” Then GDPR did exactly that.

Google gained market share and Facebook only lost a little in the EU following GDPR. Everyone else fared worse. Image via WhoTracksMe

That doesn’t mean these safeguards aren’t sensible for everyone to follow. But regulators need to consider what Facebook isn’t suggesting if they want to address its scope and brazenness, and what timelines or penalties would be feasible for smaller players.

If we take a quick look at what Facebook is proposing, it becomes obvious that it’s self-servingly suggesting what it’s already accomplished:

  • User-friendly channels for reporting content – Every post and entity on Facebook can already be flagged by users with an explanation of why
  • External oversight of policies or enforcement – Facebook is finalizing its independent Oversight Board right now
  • Periodic public reporting of enforcement data – Facebook publishes a twice-yearly report about enforcement of its Community Standards
  • Publishing their content standards – Facebook publishes its standards and notes updates to them
  • Consulting with stakeholders when making significant changes – Facebook consults a Safety Advisory Board and will have its new Oversight Board
  • Creating a channel for users to appeal a company’s content removal decisions – Facebook’s Oversight Board will review content removal appeals
  • Incentives to meet specific targets such as keeping the prevalence of violating content below some agreed threshold – Facebook already touts how 99% of child nudity content and 80% of hate speech removed was detected proactively, and that it deletes 99% of ISIS and Al Qaeda content

Facebook CEO Mark Zuckerberg at the European Union headquarters in Brussels, May 22, 2018. (Photo credit: JOHN THYS/AFP/Getty Images)

Finally, Facebook asks that the rules for what content should be prohibited on the internet “recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context.” That’s a lot of leeway. Facebook already allows different content in different geographies to comply with local laws, lets Groups self-police themselves more than the News Feed and Zuckerberg has voiced support for customizable filters on objectionable content with defaults set by local majorities.

“…Can be enforced at scale” is a last push against laws that would require tons of human moderators to enforce, which might further drag down Facebook’s share price. “100 billion pieces of content come in per day, so don’t make us look at it all.” Investments in safety for elections, content and cybersecurity already dragged Facebook’s profit growth down from 61% year-over-year at the end of 2018 to just 7% a year later.

To be clear, it’s great that Facebook is doing any of this already. Little is formally required. If the company was as evil as some make it out to be, it wouldn’t be doing any of this.

Then again, Facebook earned $18 billion in profit in 2019 off our data while repeatedly proving it hasn’t adequately protected it. The $5 billion fine and settlement with the FTC where Facebook has pledged to build more around privacy and transparency shows it’s still playing catch-up given its role as a ubiquitous communications utility.

There’s plenty more for EU and hopefully U.S. regulators to investigate. Should Facebook pay a tax on the use of AI? How does it treat and pay its human content moderators? Would requiring users to be allowed to export their interoperable friends list promote much-needed competition in social networking that could let the market compel Facebook to act better?

As the EU internal market commissioner Thierry Breton told reporters following Zuckerberg’s meetings with regulators, “It’s not for us to adapt to those companies, but for them to adapt to us.”




Facebook pushes EU for dilute and fuzzy internet content rules – TechCrunch

Facebook founder Mark Zuckerberg is in Europe this week — attending a security conference in Germany over the weekend, where he spoke about the kind of regulation he’d like applied to his platform, ahead of a slate of planned meetings with digital heavyweights at the European Commission.

“I do think that there should be regulation on harmful content,” said Zuckerberg during a Q&A session at the Munich Security Conference, per Reuters, making a pitch for bespoke regulation.

He went on to suggest “there’s a question about which framework you use,” telling delegates: “Right now there are two frameworks that I think people have for existing industries — there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you,’ but you’re not going to hold a telco responsible if someone says something harmful on a phone line.”

“I actually think where we should be is somewhere in between,” he added, making his plea for internet platforms to be a special case.

At the conference he also said Facebook now employs 35,000 people to review content on its platform and implement security measures — including suspending around 1 million fake accounts per day, a stat he professed himself “proud” of.

The Facebook chief is due to meet with key commissioners covering the digital sphere this week, including competition chief and digital EVP Margrethe Vestager, internal market commissioner Thierry Breton and Věra Jourová, who is leading policymaking around online disinformation.

The timing of his trip is clearly linked to digital policymaking in Brussels — with the Commission due to set out its thinking around the regulation of artificial intelligence this week. (A leaked draft last month suggested policymakers are eyeing risk-based rules to wrap around AI.)

More widely, the Commission is wrestling with how to respond to a range of problematic online content — from terrorism to disinformation and election interference — which also puts Facebook’s 2 billion+ user social media empire squarely in regulators’ sights.

Another policymaking plan — a forthcoming Digital Service Act (DSA) — is slated to upgrade liability rules around internet platforms.

The details of the DSA have yet to be publicly laid out, but any move to rethink platform liabilities could present a disruptive risk for a content-distributing giant such as Facebook.

Going into meetings with key commissioners, Zuckerberg made his preference for being considered a “special” case clear — saying he wants his platform to be regulated not like the media businesses his empire has financially disrupted, nor like a dumb-pipe telco.

On the latter it’s clear — even to Facebook — that the days of Zuckerberg being able to trot out his erstwhile mantra that “we’re just a technology platform,” and wash his hands of tricky content stuff, are long gone.

Russia’s 2016 foray into digital campaigning in the U.S. elections and sundry content horrors/scandals before and since have put paid to that — from nation state-backed fake news campaigns to live-streamed suicides and mass murder.

Facebook has been forced to increase its investment in content moderation. Meanwhile, it announced a News section launch last year — saying it would hand-pick publishers’ content to show in a dedicated tab.

The “we’re just a platform” line hasn’t been working for years. And EU policymakers are preparing to do something about that.

With regulation looming, Facebook is now directing its lobbying energies into trying to shape a policymaking debate — calling for what it dubs “the ‘right’ regulation.”

Here the Facebook chief looks to be applying a similar playbook to Google’s CEO, Sundar Pichai — who recently traveled to Brussels to push for AI rules so dilute they’d act as a tech enabler.

In a blog post published today Facebook pulls its latest policy lever: putting out a white paper which poses a series of questions intended to frame the debate at a key moment of public discussion around digital policymaking.

Top of this list is a push to foreground a focus on free speech, with Facebook asking “how can content regulation best achieve the goal of reducing harmful speech while preserving free expression?” — before suggesting more of the same: (free, to its business) user-generated policing of its platform.

Another suggestion it sets out which aligns with existing Facebook moves to steer regulation in a direction it’s comfortable with is for a channel to be created for users to appeal content removal or non-removal. Which of course entirely aligns with a content decision review body Facebook is in the process of setting up — but which is not in fact independent of Facebook.

Facebook is also lobbying in the white paper to be able to throw platform levers to meet a threshold of “acceptable vileness” — i.e. it wants a proportion of law-violating content to be sanctioned by regulators — with the tech giant suggesting: “Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.”

It’s also pushing for the fuzziest and most dilute definition of “harmful content” possible. On this Facebook argues that existing (national) speech laws — such as, presumably, Germany’s Network Enforcement Act (aka the NetzDG law) which already covers online hate speech in that market — should not apply to Internet content platforms, as it claims moderating this type of content is “fundamentally different.”

“Governments should create rules to address this complexity — that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context,” it writes — lobbying for maximum possible leeway to be baked into the coming rules.

“The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms,” Facebook’s VP of content policy, Monika Bickert, also writes in the blog.

“If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation,” she adds, ticking off more of the tech giant’s usual talking points at the point policymakers start discussing putting hard limits on its ad business.


Facebook will pay Reuters to fact-check Deepfakes and more – TechCrunch

Eye-witness photos and videos distributed by news wire Reuters already go through an exhaustive media verification process. Now the publisher will bring that expertise to the fight against misinformation on Facebook. Today it launches the new Reuters Fact Check business unit and blog, announcing that it will become one of the third-party partners tasked with debunking lies spread on the social network.

The four-person team from Reuters will review user-generated video and photos, as well as news headlines and other content in English and Spanish, submitted by Facebook or flagged by the wider Reuters editorial team. They’ll then publish their findings on the new Reuters Fact Check blog, listing the core claim and why it’s false, partially false, or true. Facebook will then use those conclusions to label misinformation posts as false and downrank them in the News Feed algorithm to limit their spread.

“I can’t disclose any more about the terms of the financial agreement but I can confirm that they do pay for this service,” Reuters’ Director of Global Partnerships Jessica April tells me of the deal with Facebook. Reuters joins the list of US fact-checking partners that includes The Associated Press, PolitiFact, Factcheck.org and four others. Facebook offers fact-checking in over 60 countries, though often with just one partner, like Agence France-Presse’s local branches.

Reuters will have two fact-checking staffers in Washington D.C. and two in Mexico City. For reference, media conglomerate Thomson Reuters has over 25,000 employees [Update: Reuters itself has 3,000 employees, 2,500 of which are journalists]. Reuters’ Global Head of UGC Newsgathering Hazel Baker said the fact-checking team could grow over time, as it plans to partner with Facebook through the 2020 election and beyond. The fact checkers will operate separately from, though with learnings gleaned from, the 12-person media verification team.

Reuters Fact Check will review content across the spectrum of misinformation formats. “We have a scale. On one end is content that is not manipulated but has lost context — old and recycled videos,” Baker tells me, referencing lessons from the course she co-authored on spotting misinfo. Next up the scale are simplistically edited photos and videos that might be slowed down, sped up, spliced, or filtered. Then there’s staged media that’s been acted out or forged, like an audio clip recorded and maliciously attributed to a politician. Next is computer-generated imagery that can concoct content or add fake things to a real video. “And finally there is synthetic or Deepfake video,” which Baker said takes the most work to produce.

Baker acknowledged criticism of how slow Facebook is to direct hoaxes and misinformation to fact-checkers. While Facebook claims it can reduce the further spread of this content by 80% using downranking once content is deemed false, that doesn’t account for all the views it gets before it’s submitted and fact-checkers reach it amongst deep queues of suspicious posts to moderate. “One thing we have as an advantage at Reuters is an understanding of the importance of speed,” Baker insists. That’s partly why the team will review content Reuters chooses based on the whole organization’s experience with fact-checking, not just what Facebook has submitted.

Unfortunately, one thing they won’t be addressing is the widespread criticism over Facebook’s policy of refusing to fact-check political ads, even if they pair sensational and defamatory misinformation with calls to donate to a campaign. “We wouldn’t comment on that Facebook policy. That’s ultimately up to them,” Baker tells TechCrunch. We’ve called on Facebook to ban political ads, fact-check them or at least those from presidential candidates, limit microtargeting, and/or only allow campaign ads using standardized formats without room for making potentially misleading claims.

The problem of misinformation looms large as we enter the primaries ahead of the 2020 election. Rather than just being financially motivated, anyone from individual trolls to shady campaigns to foreign intelligence operatives can find political incentives for mucking with democracy. Ideally, an organization with the experience and legitimacy of Reuters would have the funding to put more than four staffers to work protecting hundreds of millions of Facebook users.

Unfortunately, Facebook is straining its bottom line to make up for years of neglect around safety. Big expenditures on content moderators, security engineers, and policy improvements depressed its net income growth from 61% year-over-year at the end of 2018 to just 7% as of last quarter. That’s a quantified commitment to improvement. Yet clearly the troubles remain.

Facebook spent years crushing its earnings reports with rapid gains in user count, revenue and profits. But it turns out that what looked like incredible software-powered margins were propped up by an absence of spending on safeguards. The sudden awakening to the price of protecting users has hit other tech companies like Airbnb, which The Wall Street Journal reports fell from $200 million in yearly profit in late 2018 to a loss of $332 million a year later as it combats theft, vandalism and discrimination.

Paying Reuters to help is another step in the right direction for Facebook that’s now two years into its fact-checking foray. It’s just too bad it started so far behind.


UK names its pick for social media ‘harms’ watchdog – TechCrunch

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms. As a result, it has named Ofcom, the existing communications watchdog, as its preferred pick for enforcing rules around “harmful speech” on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government set out populist but controversial proposals to place a duty of care on Internet platforms, responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However, digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies on uploads in order to comply with very broadly defined concepts of harm. Legal experts are also critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal as compared to content that has “potential to cause harm,” with the heaviest content removal requirements planned for terrorist and child sexual exploitation content. Companies will not be forced to remove “specific pieces of legal content,” as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints.”

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary.”

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that online harms remains “a key legislative priority.”

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meanwhile, the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice, with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user-generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals, which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom said it will work with the government to ensure “any regulation provides effective protection for people online” and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However, the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law,” including the ability to “investigate and apply sanctions to companies which fail to meet their obligations.”

In the DCMS committee’s final report, parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.




Will Apple, Facebook or Microsoft be the future of augmented reality? – TechCrunch

Apple is seen by some as critical to the future of augmented reality, despite limited traction for ARKit so far and its absence from smartglasses (again, so far). Yet Facebook, Microsoft and others are arguably more important to where the market is today.

All of this could see the augmented reality market growing from an active installed base approaching 900 million and over $8 billion in revenue in 2019, to a forecast of more than two and a half billion active installed base and nearly $60 billion in revenue by 2024.

 

Facebook: The messaging play

Facebook has talked about its long-term potential to launch smartglasses, but in 2020, its primary presence in the AR market is as a mobile AR platform (note: Facebook is also a VR market leader with Oculus). Although there are other ways to define them, mobile AR platforms can be thought of as three broad types:

  1. messaging-based (e.g. Facebook Messenger, Instagram, TikTok, Snapchat, Line)
  2. OS-based (e.g. Apple ARKit, Google ARCore)
  3. web-based (e.g. 8th Wall, Torch, others)




6 strategic stages of seed fundraising in 2020 – TechCrunch

With so many new investors, the old seed fundraise playbook needs a rewrite

Seed fundraising is rarely easy, but it certainly used to be a lot less complicated than it is today. In a simpler world, a seed investor (or maybe two) would lead a round, which meant that they would write the terms of the deal in a term sheet and then pass that document to their friends to flesh out the funds and eventually close the round. That universe of investors was small and (unfortunately) often cliquish, but everyone sort of knew each other and founders always knew at least who to start with in these early fundraises.

That world is long since gone, particularly at the seed stage. Now there are thousands of people who write checks into the earliest startup venture rounds, making it increasingly challenging for founders to find the right investors. “Pre-seed,” “seed,” “post-seed,” “seed extension,” “pre-Series A” and more terms get batted about, none of which are all that specific about what kinds of startups these investors actually invest in.

Worse, the obvious metrics that once helped stack-rank investors — like the size of a potential check — have come to matter far less. In their place are more nuanced ones, like an investor’s ability to accelerate a deal to its closing. Today, your greatest lead investor may be the one who ends up writing the smallest check.

Given how much the landscape has changed, I wanted to do two things for founders thinking through a seed fundraise. First, I want to talk about how to strategize around a seed fundraise today, given the radical changes in the market over the past few years. Second, I want to talk about a couple of the archetypes of startup stages you see in the market today and discuss how to handle each of them.

This article focuses on “conventional” seed fundraising and doesn’t get into a bunch of alternative models of VC that I intend to explore in the coming weeks. If you thought traditional seed investing is complicated, wait until you see what the alternatives look like. The upshot, though, is that founders with the right strategy have more choices than ever, and, ultimately, that means there are more efficient ways to use capital to get the desired outcome for your startup.

Thinking through a seed fundraise strategy

Let’s get some preliminaries out of the way. This discussion assumes that you are a startup looking to raise a seed round of some kind (i.e. you’re not looking to bootstrap your company) and that you are looking to close a conventional venture capital round (i.e. equity, not debt).

The problem with most seed fundraising advice is that it isn’t tailored to the specific stage of the startup under discussion. As I see it, there are now roughly six stages for startups before they reach scale. Those stages are:


Facebook Dating launch blocked in Europe after it fails to show privacy workings – TechCrunch

Facebook has been left red-faced after being forced to call off the launch of its dating service in Europe because it failed to give its lead EU data regulator enough advance warning — including failing to demonstrate it had performed a legally required assessment of privacy risks.

Yesterday, Ireland’s Independent.ie newspaper reported that the Irish Data Protection Commission (DPC) — using inspection and document seizure powers set out in Section 130 of the country’s Data Protection Act — had sent agents to Facebook’s Dublin office seeking documentation that Facebook had failed to provide.

In a statement on its website, the DPC said Facebook first contacted it about the rollout of the dating feature in the EU on February 3.

“We were very concerned that this was the first that we’d heard from Facebook Ireland about this new feature, considering that it was their intention to roll it out tomorrow, 13 February,” the regulator writes. “Our concerns were further compounded by the fact that no information/documentation was provided to us on 3 February in relation to the Data Protection Impact Assessment [DPIA] or the decision-making processes that were undertaken by Facebook Ireland.”

Facebook announced its plan to get into the dating game all the way back in May 2018, trailing its Tinder-encroaching idea to bake a dating feature for non-friends into its social network at its F8 developer conference.

It went on to test launch the product in Colombia a few months later. Since then, it’s been gradually adding more countries in South America and Asia. It also launched in the U.S. last fall, after it was fined $5BN by the FTC for historical privacy lapses.

At the time of its U.S. launch, Facebook said dating would arrive in Europe by early 2020. It just didn’t think to keep its lead EU privacy regulator in the loop, despite the DPC having multiple (ongoing) investigations into other Facebook-owned products at this stage.

It’s either an extremely careless oversight or, well, an intentional fuck you to privacy oversight of its data-mining activities. (Among multiple probes being carried out under Europe’s General Data Protection Regulation, the DPC is looking into Facebook’s claimed legal basis for processing people’s data under the Facebook T&Cs, for example.)

The DPC’s statement confirms that its agents visited Facebook’s Dublin office on February 10 to carry out an inspection — in order to “expedite the procurement of the relevant documentation”. Which is a nice way of the DPC saying Facebook spent a whole week still not sending it the required information.

“Facebook Ireland informed us last night that they have postponed the roll-out of this feature,” the DPC’s statement goes on. Which is a nice way of saying Facebook fucked up and is being made to put a product rollout it’s been planning for at least half a year on ice.

The DPC’s head of communications, Graham Doyle, confirmed the enforcement action, telling us: “We’re currently reviewing all the documentation that we gathered as part of the inspection on Monday and we have posed further questions to Facebook and are awaiting the reply.”

“Contained in the documentation we gathered on Monday was a DPIA,” he added.

This raises the question of why Facebook didn’t send the DPIA to the DPC on February 3. We’ve reached out to Facebook for comment and to ask when it carried out the DPIA.

Update: A Facebook spokesperson has now sent this statement:

It’s really important that we get the launch of Facebook Dating right so we are taking a bit more time to make sure the product is ready for the European market. We worked carefully to create strong privacy safeguards, and complete the data processing impact assessment ahead of the proposed launch in Europe, which we shared with the IDPC when it was requested.

We’ve asked the company why, if it’s “really important” to get the launch “right,” it did not provide the DPC with the required documentation in advance instead of the regulator having to send agents to Facebook’s offices to get it themselves. We’ll update this report with any response.

Update: A Facebook spokesman has now provided us with a second statement — in which it writes:

We’re under no legal obligation to notify the IDPC of product launches. However, as a courtesy to the Office of the Data Protection Commission, who is our lead regulator for data protection in Europe, we proactively informed them of this proposed launch two weeks in advance. We had completed the data processing impact assessment well in advance of the European launch, which we shared with the IDPC when they asked for it.

Under Europe’s GDPR, there’s a requirement for data controllers to bake privacy by design and default into products which are handling people’s information. (And a dating product clearly would be.)

Conducting a DPIA — a process whereby planned processing of personal data is assessed for its impact on the rights and freedoms of individuals — is a requirement under the GDPR when, for example, individual profiling is taking place or sensitive data is being processed on a large scale.

And again, the launch of a dating product on a platform such as Facebook which has hundreds of millions of regional users would be a clear-cut case for such an assessment to be carried out ahead of any launch.

In later comments to TechCrunch today, the DPC reiterated that it’s still waiting for Facebook to respond to follow-up questions it put to the company after its officers had obtained documentation related to Facebook Dating during the office inspection.

The regulator could ask Facebook to make changes to how the product functions in Europe if it’s not satisfied it complies with EU laws. So a delay to the launch may mean many things.

“We’re still examining the documentation that we have,” Doyle told us. “We’re still awaiting answers to the queries that we posed to Facebook on Tuesday [February 11]. We haven’t had any response back from them and it would be our expectation that the feature won’t be rolled out in advance of us completing our analysis.”

Asked how long the process might take, he said: “We don’t control this time process but a lot of it is dependent on how quickly we get responses to the queries that we’ve posed and how much those responses deal with the queries that we’ve raised — whether we have to go back to them again etc. So it’s just not possible to say at this stage.”

This report was updated with additional comment from Facebook and the DPC


7-month-old Simsim secures $16M for its social commerce in India – TechCrunch

Simsim, a social commerce startup in India, said on Friday it has raised $16 million in the seven months since its founding as it attempts to replicate the offline retail experience in the digital world with help from influencers.

The Gurgaon-based startup said it raised the $16 million across seed, Series A and Series B financing rounds from Accel Partners, Shunwei Capital and Good Capital. (The most recent round, the Series B, was $8 million.)

“Despite e-commerce players offering major discounts, most of the sales in India are still happening in brick-and-mortar stores. There is a simple reason for that: Trust,” explained Amit Bagaria, co-founder of Simsim, in an interview with TechCrunch.

The vast majority of Indians are still not comfortable reading product descriptions — especially in English, he said.

Simsim is taking a different approach to this opportunity. On its app, users watch short videos, produced in local languages, in which influencers apply beauty products or try on dresses and explain the ins and outs of the products. Below the video, the items appear as they are being discussed, and users can tap on them to proceed with a purchase.
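The shoppable-video mechanic described above — products surfacing beneath the player as the influencer discusses them — can be modeled as a timeline that maps timestamps to items. The sketch below is purely illustrative; all class and field names are hypothetical assumptions, not Simsim’s actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    sku: str        # hypothetical product identifier
    name: str
    price_inr: int  # price in Indian rupees

@dataclass
class ShoppableVideo:
    influencer: str
    language: str
    # maps seconds-into-the-video to the product being shown at that moment
    timeline: dict[float, Product] = field(default_factory=dict)

    def products_at(self, t: float) -> list[Product]:
        """Products discussed at or before time t, most recently shown first."""
        return [p for ts, p in sorted(self.timeline.items(), reverse=True) if ts <= t]

# Example: a Hindi-language video where a face mask is shown at 0:05
# and a kurti at 0:40; a viewer paused at 0:45 sees both items.
video = ShoppableVideo(influencer="demo_creator", language="hi")
video.timeline[5.0] = Product("FM-01", "Face mask", 199)
video.timeline[40.0] = Product("KU-02", "Kurti", 799)

print([p.sku for p in video.products_at(45.0)])  # ['KU-02', 'FM-01']
```

The design choice here is simply that the product list is a function of playback position, which is what lets items “appear as they are being discussed.”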

“Videos help in educating users about the category. So many of them may not have used face masks, for instance. But it becomes easier when the community influencer is able to show them how to apply it,” said Rohan Malhotra, managing partner at Good Capital, in an interview with TechCrunch.

Influencers typically sell a range of items, and users can follow them to browse their past catalog and stay on top of future sales, said Bagaria, who previously worked at the e-commerce venture of financial services firm Paytm.

“This interactiveness is enabling Simsim to mimic the offline stores experience,” said Malhotra, who is one of the earliest investors in Meesho, also a social commerce startup that last year received backing from Facebook and Prosus Ventures.

“The beauty to me of social commerce is that you’re not changing consumer behavior. People are used to consuming on WhatsApp — and it’s working for Meesho. Over here, you are getting the touch and feel experience and are able to mentally picture the items much clearer,” he said.

Simsim handles the inventories, which it sources from manufacturers and brands, and it works with a number of logistics players to deliver the products.

“Several Indian cities and towns are some of the biggest production hubs of various high-quality items. But these people have not been able to efficiently sell online or grow their network in the offline world. On Simsim, they are able to work with influencers and market their products,” said Bagaria.

The platform today works with more than 1,200 influencers, who get a commission for each item they sell, said Bagaria, who plans to grow this figure to 100,000 in the coming years.

Though Simsim has been open to users for only six months and is still in a nascent stage, it is beginning to show some growth. It has amassed over a million users, most of whom live in small cities and towns, and it is selling thousands of items each day, said Bagaria.

He said the platform, which currently supports Hindi, Tamil, Bengali and English, will add more than a dozen additional languages by the end of the year. In about a month, Simsim also plans to start showing live videos, where influencers will be able to answer queries from users.

A handful of startups have emerged in India in recent years that are attempting to rethink the e-commerce market in the nation. Amazon and Walmart, both of which have poured billions of dollars into India, have taken notice too. Both have added support for Hindi in the last two years and have made several more tweaks to their platforms to expand their reach.


Flip raises $4M to pounce on the growing sector of employee messaging – TechCrunch

We’re all familiar with text messaging groups for multi-person coordination. I’ve lost count of how many WhatsApp, Telegram and Facebook Messenger groups I’m in! Other apps like Threema have started to be used in a business context, and startups like Staffbase have decided to become full-blown “workforce messaging” platforms. The thinking now amongst investors is that messaging is about to explode in all sorts of verticals, making it a rich seam to mine.

In that vein, Flip, a Stuttgart, Germany-based employee messenger app, has now raised €3.6M ($4M) from LEA Partners and Cavalry Ventures, together with Plug and Play Ventures and business angels such as Jürgen Hambrecht (chairman of the supervisory board, BASF), Prof. Dr. Kurt Lauk (chairman of the supervisory board, Magna International), Florian Buzin (founder, Starface) and Andreas Burike (HR business angel). The capital raised will fuel the expansion of the team and the development of further markets.

Founded in 2018, the startup offers companies a platform that connects and informs employees across all levels in a legally compliant manner.

That last part is important. The application is based on a GDPR-compliant data and employee protection concept, which was validated jointly with experts and works councils of several DAX companies. It also integrates with many existing corporate IT infrastructures.

The startup has now secured customers including Porsche, Bauhaus, Edeka, Junge IG Metall and Wüstenrot & Württembergische. Parts of Sparkasse and Volksbank are also among the customer base. Deutsche Telekom is also a partner.

“Flip is the easiest solution for internal communication in companies of all sizes,” Flip founder and CEO Benedikt Ilg said in a statement.

Bernhard Janke from LEA Partners said: “As a young company, Flip has been able to attract prestigious clients. The lean solution can be integrated into existing IT systems and existing communication processes, even in large organizations. With the financing round, we want to further expand the team and product and thus support the founders in their vision of making digital workforce engagement accessible to all companies.”

Claude Ritter of Cavalry Ventures said: “We are convinced that Flip is setting new standards in this still young market with its safe, lightweight and extremely powerful product.”
