It’s time for tech startups to get political – TechCrunch

Between 2005 and 2018, the five biggest U.S. tech firms collectively spent more than half a billion dollars lobbying federal policymakers. But they shelled out even more in 2019: Facebook boosted its lobbying budget by 25%, while Amazon hiked its political outlay by 16%. Together, America’s biggest tech firms spent almost $64 million in a bid to shape federal policies.

Clearly, America’s tech giants feel they’re getting value for their money. But as CEO of Boundless, a 40-employee startup that doesn’t have millions of dollars to invest in political lobbying, I’m proposing another way. One of the things we care most about at Boundless is immigration. And while we’ve yet to convince Donald Trump and Stephen Miller that immigrants are a big part of what makes America great — hey, we’re working on it! — we’ve found that when you have a clear message and a clear mission, even a startup can make a big difference.

So how can scrappy tech companies make a splash in the current political climate? Here are some guiding principles we’ve learned.

1) Speak out

You can’t make a difference if you don’t make some noise. A case in point: Boundless is spearheading the business community’s pushback against the U.S. Department of Homeland Security’s “public charge rule.” This sweeping immigration reform would preclude millions of people from obtaining U.S. visas and green cards — and therefore make it much harder for American businesses to hire global talent — based on a set of new, insurmountable standards. We’re mounting that pushback not by cutting checks to K Street, but by using our own expertise, creativity and people skills — the very things that helped make our company a success in the first place.

By leveraging our unique strengths — including our own proprietary data — we’ve been able to put together a smart, business-focused amicus brief urging courts to strike down the public charge rule. And because we combine immigration-specific expertise with a real understanding of the issues that matter most to tech companies, we’ve been able to convince more than 100 other firms — such as Microsoft, Twitter, Warby Parker, Levi Strauss & Co. and Remitly — to cosign our amicus brief. Will that be enough to persuade the courts and steer federal policy in immigrants’ favor? The jury’s still out. But whatever happens, we take satisfaction in knowing that we’re doing everything we can on behalf of the entire immigrant community, not just our customers, in defense of a cause we’re passionate about.

2) Take a stand

Taking a stand is risky, but staying silent is a gamble, too: Consumers are increasingly socially conscious, and almost nine out of 10 said in one survey that they prefer to buy from brands that take active steps to support the causes they care about. It depends a bit on the issue, though. Another survey found that trash-talking the president will win you brownie points with millennials but cost you support among Baby Boomers, for instance.

So pick your battles — but remember that media-savvy consumers can smell a phony a mile off. It’s important to choose causes you truly stand behind and then put your money where your mouth is. At Boundless, we do that by hiring a diverse workforce — not just immigrants, but also women (over 60% of our team), people of color (35%) and LGBTQ+ people (15%) — and putting time and energy into helping them succeed. Figure out what authenticity looks like for your company, and make sure you’re living your values, not just talking about them.

3) Band together

Tech giants might have a bigger megaphone, but there are a lot of startups in our country, and quantity has a quality all its own. In fact, the Small Business Administration reported in 2018 that there are 30.2 million small businesses in the United States, 414,000 of which are classified as “startups.” So instead of trying to shout louder, try forging connections with other smart, up-and-coming companies with unique voices and perspectives of their own.

At Boundless, we routinely reach out to the other startups that have received backing from our own investor groups — national networks such as Foundry Group, Trilogy Equity Partners, Pioneer Square Labs, Two Sigma Ventures and Flybridge Capital Partners — in the knowledge that these companies will share many of our values and be willing to listen to our ideas.

For startups, the venture capitalists, accelerators and incubators that helped you launch and grow can be an incredible resource: Leverage their expertise and Rolodexes to recruit a posse of like-minded startups and entrepreneurs that can serve as a force multiplier for your political activism. Instead of taking a stand as a single company, you could potentially bring dozens of companies — from a range of sectors, each carrying its own weight in its field — on board for your advocacy efforts.

4) Use your superpowers

Every company has a few key superpowers, and the same things that make you a commercial success can help to sway policymakers, too. Boundless uses data and design to make the immigration process more straightforward, and number-crunching and messaging skills come in handy when we’re doing advocacy work, too.

Our data-driven report breaking down naturalization trends and wait times by location made a big splash, for instance, and not just in top-ranked Cleveland. We presented our findings to Congress, and soon afterward some Texas lawmakers began demanding reductions in wait times for would-be citizens. We can’t prove our advocacy was the deciding factor, but it’s likely that our study helped nudge them in the right direction.

5) Work the media

Whether you’re Bill Gates or a small-business owner, if you’re quoted in The New York Times, your voice will reach the same people. Reporters love to feel like they’re including quotes from the “little guy,” so make yourself accessible, learn to give snappy, memorable quotes, and you’ll soon find that they keep you on speed dial.

Our phones rang off the hook when Trump tried to push through a healthcare mandate by executive order, for instance, and our founders were quoted by top media outlets — from Reuters to Rolling Stone. It takes a while to build media relationships and establish yourself as a credible source, but it’s a great way to win national attention for your advocacy.

6) Know your lawmakers

To make a difference, you’ll need allies in the corridors of power. Reach out to your senators and congresspeople, and get to know their staffers, too. Working in politics is often thankless, and many aides love to hear from new voices, especially ones who are willing to stake out controversial positions on big issues, sound the alarm on bad policies or help move the Overton window to enable better solutions.

We’ve often found that prior to hearing from us, lawmakers simply hadn’t considered the special challenges faced by smaller tech companies, such as the lack of internal legal, human and financial resources needed to comply with various regulations. And those lawmakers come away from our meetings with a better understanding of the need to craft straightforward policies that won’t drown small businesses in red tape.

Political change doesn’t just happen in the Capital Beltway, so make a point of reaching out to your municipal and state-level leaders, too. In 2018, Boundless pitched to the Civic I/O Mayors Summit at SXSW because we knew that municipal leaders played a critical role in welcoming new Americans into our communities. Local policies and legislation can have a big impact on startups, and the support of local leaders remains a critical foundation for the kinds of change we want to see made to the U.S. immigration system.

Take the next step

It’s easy to make excuses or expect someone else to advocate on your behalf. But if there’s something you think the government could be doing better, then you have an obligation to use your company’s energy, talent and connections to push back and create momentum for reform. Sure, it would be nice to splash money around and hire a phalanx of lobbyists to shape public policy — but it’s perfectly possible to make a big difference without spending a dime.

But first, figure out what you stand for and what strengths and superpowers you can bring to bear on the problems you and your customers face. Above all, don’t be afraid to take a stand.




Facebook’s dodgy defaults face more scrutiny in Europe – TechCrunch

Italy’s Competition and Markets Authority has launched proceedings against Facebook for failing to fully inform users about the commercial uses it makes of their data.

At the same time a German court has today upheld a consumer group’s right to challenge the tech giant over data and privacy issues in the national courts.

Lack of transparency

The Italian authority’s action, which could result in a fine of €5 million for Facebook, follows an earlier decision by the regulator in November 2018, when it found the company had not been dealing plainly with users about the underlying value exchange involved in signing up to the ‘free’ service and fined Facebook €5M for failing to properly inform users how their information would be used commercially.

In a press notice about its latest action, the watchdog notes Facebook has removed a claim from its homepage — which had stated that the service ‘is free and always will be’ — but finds users are still not being informed, “with clarity and immediacy”, about how the tech giant monetizes their data.

The authority has prohibited Facebook from continuing what it dubs a “deceptive practice” and ordered it to publish an amending declaration on its homepage in Italy, as well as in the Facebook app and on the personal page of each registered Italian user.

In a statement responding to the watchdog’s latest action, a Facebook spokesperson told us:

We are reviewing the Authority decision. We made changes last year — including to our Terms of Service — to further clarify how Facebook makes money. These changes were part of our ongoing commitment to give people more transparency and control over their information.

Last year Italy’s data protection agency also fined Facebook $1.1M — in that case for privacy violations attached to the Cambridge Analytica data misuse scandal.

Dodgy defaults

In separate but related news, a ruling by a German court today found that Facebook can continue to use the advertising slogan that its service is ‘free and always will be’ — on the grounds that it does not require users to hand over monetary payments in exchange for using the service.

A local consumer rights group, vzbv, had sought to challenge Facebook’s use of the slogan — arguing it’s misleading, given the platform’s harvesting of user data for targeted ads. But the court disagreed.

However, that was only one of a number of data protection complaints filed by the group — 26 in all. And the Berlin court found in its favor on a number of other fronts.

Significantly, vzbv has won the right to bring data protection-related legal challenges within Germany even with the pan-EU General Data Protection Regulation in force — opening the door to strategic litigation by consumer advocacy bodies and privacy rights groups in what is a very pro-privacy market.

This looks interesting because one of Facebook’s favored legal arguments in a bid to derail privacy challenges at an EU Member State level has been to argue those courts lack jurisdiction — given that its European HQ is sited in Ireland (and GDPR includes provision for a one-stop shop mechanism that pushes cross-border complaints to a lead regulator).

But this ruling looks like it will make it tougher for Facebook to funnel all data and privacy complaints via the heavily backlogged Irish regulator — which has, for example, been sitting on a GDPR complaint over forced consent by adtech giants (including Facebook) since May 2018.

The Berlin court also agreed with vzbv’s argument that Facebook’s privacy settings and T&Cs violate laws around consent — such as a location service that is activated by default in the Facebook mobile app, and a pre-ticked setting that made users’ profiles indexable by search engines by default.

The court also agreed that certain pre-formulated conditions in Facebook’s T&Cs do not meet the required legal standard — such as a requirement that users agree to their name and profile picture being used “for commercial, sponsored or related content”, and another stipulation that users agree in advance to all future changes to the policy.

Commenting in a statement, Heiko Dünkel from the law enforcement team at vzbv, said: “It is not the first time that Facebook has been convicted of careless handling of its users’ data. The Chamber of Justice has made it clear that consumer advice centers can take action against violations of the GDPR.”

We’ve reached out to Facebook for a response.


US regulators need to catch up with Europe on fintech innovation  – TechCrunch

Fintech companies are fundamentally changing how the financial services ecosystem operates, giving consumers powerful tools to help with savings, budgeting, investing, insurance, electronic payments and many other offerings. This industry is growing rapidly, filling gaps where traditional banks and financial institutions have failed to meet customer needs.

Yet progress has been uneven. Notably, consumer fintech adoption in the United States lags well behind much of Europe, where forward-thinking regulation has sparked an outpouring of innovation in digital banking services — as well as the backend infrastructure onto which products are built and operated.

That might seem counterintuitive, as regulation is often blamed for stifling innovation. But European regulators have focused on reducing barriers to fintech growth rather than protecting the status quo. For example, the U.K.’s Open Banking regulation requires the country’s nine big high-street banks to share customer data with authorized fintech providers.

The EU’s PSD2 (Payment Services Directive 2) obliges banks to create application programming interfaces (APIs) and related tools that let customers share data with third parties. This creates standards that level the playing field and nurture fintech innovation. And the U.K.’s Financial Conduct Authority supports new fintech entrants by running a “sandbox” for software testing that helps speed new products into service.
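To make the API side of this concrete, here is a minimal sketch of what a third-party fintech service might do when pulling consented account data over a PSD2-style interface. The base URL, endpoint path, token and field names are hypothetical placeholders rather than any real bank’s API, and real open-banking access additionally requires licensing, strong customer authentication and certificate-based identification of the third party.

```python
import requests

# Hypothetical PSD2-style account information request (illustrative only).
# Real open-banking access requires a licensed third-party provider (TPP),
# eIDAS certificates and the customer's explicit, strongly authenticated consent.
API_BASE = "https://api.example-bank.com/open-banking/v1"  # placeholder base URL
ACCESS_TOKEN = "token-from-the-banks-consent-flow"         # placeholder OAuth token

def fetch_account_balances(account_id: str) -> dict:
    """Fetch balances for one account the customer has agreed to share."""
    response = requests.get(
        f"{API_BASE}/accounts/{account_id}/balances",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(fetch_account_balances("acc-123"))
```

The point of standardizing this kind of interface, as PSD2 does, is that a budgeting or payments startup can build once against a common pattern instead of negotiating bespoke data access with every bank.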

Regulation, if implemented as effectively as it has been in Europe, can be a net positive for consumers. New rules are coming either way; if fintech entrepreneurs engage early and often with regulators, they can help ensure that the regulations put in place support innovation and ultimately benefit the consumer.


Twitter DMs now have emoji reactions – TechCrunch

Twitter is pouring a little more fuel on the messaging fire. It’s added a heart+ button to its direct messaging interface which lets users shortcut to a pop-up menu of seven emoji reactions so they can quickly express how they’re feeling about a missive.

Emoji reactions can be added to text or media messages — either via the heart+ button or by double tapping on the missive to bring up the reaction menu.

The social network teased the incoming tweak a few hours earlier in a knowing tweet about sliding into DMs that actually revealed the full line-up of reaction emojis — which, in text form, can be described as: crying lol; shocked/surprised; actually sad; heart; flame; thumbs-up and thumbs-down.

 

So instead of a smiley face, Twitter users are being nudged toward an on-brand Twitter heart, in keeping with its long-standing pick for a pleasure symbol.

The flame is perhaps slightly surprising for a company that’s publicly professed to want to improve the conversational health of its platform.

If it’s there to stand in for appreciation, a clap emoji could surely have done the trick, whereas flame wars aren’t typically associated with constructive speech. But — hey — the flame icon does catch the eye…

Twitter is late to this extroverted party. Rival messaging platforms such as Apple iMessage and Facebook Messenger have had emoji reactions for years, whereas Twitter kept things relatively minimal and chat-focused in its DM funnel — to its credit (at least if you value the service as, first and foremost, an information network).

So some might say Twitter jumping on the emoji reaction bandwagon now is further evidence it’s trying to move closer to rivals like Facebook as a product. (See also: Last year’s major desktop product redesign — which has been compared in look and feel to the Facebook News Feed.)

But if so this change is at least a relatively incremental one.

Twitter users have also, of course, always been able to react to an incoming DM by sending whatever emoji or combination of emoji they prefer as a standard reply. Though now lazy thumbs have a shortcut to emote — so long as they’re down with Twitter’s choice of icons.

In an FAQ about the new DM emoji reactions, Twitter notes that emoting will by default send a notification to all conversation participants “any time a new reaction is added to a message”.

So, yes, there’s attention-spamming potential aplenty here…

Adjust your notification and DM settings accordingly.

You can only choose one reaction per missive. Each symbol is displayed under the message/media with a count next to it — to allow for group tallies to be totted up. 

NB: Clicking on another symbol will swap out the earlier one — generating, er, more notification spam. And really annoying people could keep flipping their reaction to generate a real-time emoji streaming game of notification hell (hi growth hackers!) with folks they’ve been DMing with. So that’s another good reason to lock down your Twitter settings.

Users still running older versions of Twitter’s apps which don’t support message reactions will see a standard text emoji message per reaction sent (see examples below). This kinda confusingly makes it look like the reaction sender has actually been liking/flaming their own stuff. So all the more reason to not be spammy about emoji.




Yo Facebook & Instagram, stop showing Stories reruns – TechCrunch

If I watch a Story cross-posted from Instagram to Facebook on either of the apps, it should appear as “watched” at the back of the Stories row on the other app. Why waste my time showing me Stories I already saw?

It’s been over two years since Instagram Stories launched cross-posting to Facebook Stories. Countless hours of each feature’s 500 million daily users’ time have been squandered viewing repeats. Facebook and Messenger already synchronize the watched/unwatched state of Stories. It’s long past time that this was expanded to encompass Instagram.

I asked Facebook and Instagram if it had plans for this. A company spokesperson told me that it built cross-posting to make sharing easier across people’s different audiences on Facebook and Instagram, and that it’s continuing to explore ways to simplify and improve Stories. But they gave no indication that Facebook realizes how annoying this is or that a solution is in the works.

The end result if this gets fixed? Users would spend more time watching new content, more creators would feel seen, and Facebook’s choice to jam Stories into all its apps would feel less redundant and invasive. If I send a reply to a Story on one app, I’m not going to send it or something different when I see the same Story on the other app a few minutes or hours later. Repeated content leads to more passive viewing and less interactive communication with friends, even though Facebook and Instagram stress that it’s this zombie consumption that’s unhealthy.

The only possible downside to changing this could be fewer Stories ad impressions, if secondary viewings of people’s best friends’ Stories keep them watching more than new content would. But prioritizing making money over the user experience is precisely what Mark Zuckerberg has emphasized is not Facebook’s strategy.

There’s no need to belabor the point any further. Give us back our time. Stop the reruns.


Facebook speeds up AI training by culling the weak – TechCrunch

Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.

It’s part of the company’s new focus on “embodied AI,” meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things — responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I’ll leave to your own speculation, but the fact is they’ve recruited and funded serious researchers to look into this and related domains of AI work.

To create such “embodied” systems, you need to train them using a reasonable facsimile of the real world. One can’t expect an AI that’s never seen an actual hallway to know what walls and doors are. And given how slowly real robots actually move in real life, you can’t expect them to learn their lessons in the real world either. That’s what led Facebook to create Habitat, a set of simulated real-world environments meant to be photorealistic enough that what an AI learns by navigating them could also be applied to the real world.

Such simulators, which are common in robotics and AI training, are also useful because, being simulators, you can run many instances of them at the same time — for simple ones, thousands simultaneously, each one with an agent in it attempting to solve a problem and eventually reporting back its findings to the central system that dispatched it.

Unfortunately, photorealistic 3D environments use a lot of computation compared to simpler virtual ones, meaning that researchers are limited to a handful of simultaneous instances, slowing learning to a comparative crawl.

The Facebook researchers, led by Dhruv Batra and Erik Wijmans, the former a professor and the latter a PhD student at Georgia Tech, found a way to speed up this process by an order of magnitude or more. And the result is an AI system that can navigate a 3D environment from a starting point to a goal with a 99.9% success rate and few mistakes.

Simple navigation is foundational to a working “embodied AI” or robot, which is why the team chose to pursue it without adding any extra difficulties.

“It’s the first task. Forget the question answering, forget the context — can you just get from point A to point B? When the agent has a map this is easy, but with no map it’s an open problem,” said Batra. “Failing at navigation means whatever stack is built on top of it is going to come tumbling down.”

The problem, they found, was that the training systems were spending too much time waiting on slowpokes. Perhaps it’s unfair to call them that — these are AI agents that for whatever reason are simply unable to complete their task quickly.

“It’s not necessarily that they’re learning slowly,” explained Wijmans. “But if you’re simulating navigating a one-bedroom apartment, it’s much easier to do that than navigate a 10-bedroom mansion.”

The central system is designed to wait for all its dispatched agents to complete their virtual tasks and report back. If a single agent takes 10 times longer than the rest, that means there’s a huge amount of wasted time while the system sits around waiting so it can update its information and send out a new batch.

This little explanatory gif shows how when one agent gets stuck, it delays others learning from its experience.

The innovation of the Facebook team is to intelligently cut off these unfortunate laggards before they finish. After a certain amount of time in simulation, they’re done, and whatever data they’ve collected gets added to the hoard.

“You have all these workers running, and they’re all doing their thing, and they all talk to each other,” said Wijmans. “One will tell the others, ‘okay, I’m almost done,’ and they’ll all report in on their progress. Any ones that see they’re lagging behind the rest will reduce the amount of work that they do before the big synchronization that happens.”

In this case you can see that each worker stops at the same time and shares simultaneously.

If a machine learning agent could feel bad, I’m sure it would at this point, and indeed that agent does get “punished” by the system, in that it doesn’t get as much virtual “reinforcement” as the others. The anthropomorphic terms make this out to be more human than it is — essentially inefficient algorithms or ones placed in difficult circumstances get downgraded in importance. But their contributions are still valuable.

“We leverage all the experience that the workers accumulate, no matter how much, whether it’s a success or failure — we still learn from it,” Wijmans explained.

What this means is that there are no wasted cycles where some workers are waiting for others to finish. Bringing in more experience on the task at hand sooner means the next batch of slightly better workers goes out that much earlier, a self-reinforcing cycle that produces serious gains.
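To make the mechanism concrete, here is a toy sketch of a synchronization round that cuts off stragglers once most workers have finished. It is not Facebook’s actual DD-PPO code, and the numbers are made up; it only illustrates how partial rollouts from preempted workers still get counted so that nobody sits idle.

```python
import math
import random

NUM_WORKERS = 8
STEPS_PER_ROLLOUT = 100    # experience each worker tries to collect per round
PREEMPT_FRACTION = 0.6     # once ~60% of workers finish, stop the stragglers

def simulate_round() -> list:
    """Simulate one synchronization round; return the steps each worker contributed."""
    # Each worker has a different per-step simulation cost (a 10-bedroom mansion
    # is slower to simulate than a one-bedroom apartment).
    step_costs = [random.uniform(1.0, 10.0) for _ in range(NUM_WORKERS)]
    finish_times = [cost * STEPS_PER_ROLLOUT for cost in step_costs]

    # The round is preempted once enough workers have finished their rollouts.
    num_finishers = max(1, math.ceil(PREEMPT_FRACTION * NUM_WORKERS))
    cutoff_time = sorted(finish_times)[num_finishers - 1]

    contributed = []
    for cost, finish in zip(step_costs, finish_times):
        if finish <= cutoff_time:
            contributed.append(STEPS_PER_ROLLOUT)        # finished in time
        else:
            contributed.append(int(cutoff_time / cost))  # preempted early, partial rollout
    return contributed

if __name__ == "__main__":
    steps = simulate_round()
    print("steps contributed per worker:", steps)
    print("total experience this round:", sum(steps))
    # Full and partial rollouts alike feed the next policy update,
    # so no worker waits on the slowest simulation to finish.
```

This sketch covers only the early-cutoff idea; the scaling results described below come from combining it with the decentralized, distributed training the method is named for.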

In the experiments they ran, the researchers found that the system, catchily named Decentralized Distributed Proximal Policy Optimization or DD-PPO, appeared to scale almost ideally, with performance increasing nearly linearly with the computing power dedicated to the task. That is to say, increasing the computing power 10x resulted in nearly 10x the results. On the other hand, standard algorithms led to very limited scaling, where 10x or 100x the computing power only resulted in a small boost because of how these sophisticated simulators hamstring themselves.

These efficient methods let the Facebook researchers produce agents that could solve a point-to-point navigation task in a virtual environment within their allotted time with 99.9% reliability. They even demonstrated robustness to mistakes, finding a way to quickly recognize they’d taken a wrong turn and go back the other way.

The researchers speculated that the agents had learned to “exploit the structural regularities,” a phrase that in some circumstances means the AI figured out how to cheat. But Wijmans clarified that it’s more likely that the environments they used have some real-world layout rules.

“These are real houses that we digitized, so they’re learning things about how western-style houses tend to be laid out,” he said. Just as you wouldn’t expect the kitchen to open directly into a bedroom, the AI has learned to recognize other patterns and make other “assumptions.”

The next goal is to find a way to let these agents accomplish their task with fewer resources. Each agent had a virtual camera it navigated with that provided it ordinary and depth imagery, but also an infallible coordinate system to tell where it had traveled and a compass that always pointed toward the goal. If only it were always so easy! Before this experiment, even with those resources, the success rate was considerably lower despite far more training time.

Habitat itself is also getting a fresh coat of paint with some interactivity and customizability.

Habitat as seen through a variety of virtualized vision systems.

“Before these improvements, Habitat was a static universe,” explained Wijmans. “The agent can move and bump into walls, but it can’t open a drawer or knock over a table. We built it this way because we wanted fast, large-scale simulation — but if you want to solve tasks like ‘go pick up my laptop from my desk,’ you’d better be able to actually pick up that laptop.”

Therefore, now Habitat lets users add objects to rooms, apply forces to those objects, check for collisions and so on. After all, there’s more to real life than disembodied gliding around a frictionless 3D construct.

The improvements should make Habitat a more robust platform for experimentation, and will also make it possible for agents trained in it to directly transfer their learning to the real world — something the team has already begun work on and will publish a paper on soon.


India likely to force Facebook, WhatsApp to identify the originator of messages – TechCrunch

New Delhi is inching closer to recommending regulations that would require social media companies and instant messaging app providers operating in the nation to help law enforcement agencies identify users who have posted content — or sent messages — it deems questionable, two people familiar with the matter told TechCrunch.

India will submit the suggested change to the local intermediary liability rules to the nation’s apex court later this month. The suggested change, the conditions of which may be altered before it is finalized, currently says that law enforcement agencies will have to produce a court order before exercising such requests, sources who have been briefed on the matter said.

But regardless, asking companies to comply with such a requirement would be “devastating” for international social media companies, a New Delhi-based policy advocate told TechCrunch on the condition of anonymity.

WhatsApp executives have insisted in the past that they would have to compromise end-to-end encryption of every user to meet such a demand — a move they are willing to fight over.

The government did not respond to a request for comment Tuesday evening. Sources spoke under the condition of anonymity as they are not authorized to speak to media.

Scores of companies and security experts have urged New Delhi in recent months to be transparent about the changes it planned to make to the local intermediary liability guidelines.

The Indian government proposed (PDF) a series of changes to its intermediary liability rules in late December 2018 that, if enforced, would require millions of services, operated by anyone from small and medium businesses to large corporate giants such as Facebook and Google, to make significant changes.

Among the proposed rules, the government said that intermediaries — which it defines as those services or functions that facilitate communication between two or more users and have five million or more users in India — will have to, among other things, be able to trace the originator of questionable content to avoid assuming full liability for their users’ actions.

At the heart of the changes lie the “safe harbor” laws that technology companies have so far enjoyed in many nations. The laws, currently applicable in the U.S. under the Communications Decency Act and in India under its 2000 Information Technology Act, say that tech platforms won’t be held liable for the things their users share on the platform.

Many stakeholders have said in recent months that the Indian government was keeping them in the dark by not sharing the changes it was making to the intermediary liability guidelines.

Nobody outside of a small government circle has seen the proposed changes since January of last year, said Shashi Tharoor, one of India’s most influential opposition politicians, in a recent interview with TechCrunch.

Software Freedom Law Centre, a New Delhi-based digital advocacy organization, recommended last week that the government should consider removing the traceability requirement from the proposed changes to the law, as it was “technically impossible to satisfy for many online intermediaries.”

“No country is demanding such a broad level of traceability as envisaged by the Draft Intermediaries Guidelines,” it added.

TechCrunch could not ascertain other changes the government is recommending.


Fable Studio founder Edward Saatchi on designing virtual beings – TechCrunch

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 3 of 3: designing virtual companions

In this discussion, Fable CEO Edward Saatchi addresses the technical and artistic dynamics of virtual companions: AIs created to establish one-to-one relationships with consumers. After mobile, Saatchi says he believes such virtual beings will act as the next paradigm for human-computer interaction.


TechCrunch’s Top 10 investigative reports from 2019 – TechCrunch

Facebook spying on teens, Twitter accounts hijacked by terrorists, and sexual abuse imagery found on Bing and Giphy were among the ugly truths revealed by TechCrunch’s investigative reporting in 2019. The tech industry needs more watchdogs than ever as its size magnifies the impact of safety failures and the abuse of power. Whether through malice, naivety or greed, there was plenty of wrongdoing to sniff out.

Led by our security expert Zack Whittaker, TechCrunch undertook more long-form investigations this year to tackle these growing issues. Our coverage of fundraises, product launches and glamorous exits only tells half the story. As perhaps the biggest and longest-running news outlet dedicated to startups (and the giants they become), we’re responsible for keeping these companies honest and pushing for a more ethical and transparent approach to technology.

If you have a tip potentially worthy of an investigation, contact TechCrunch at tips@techcrunch.com or by using our anonymous tip line’s form.

Image: Bryce Durbin/TechCrunch

Here are our top 10 investigations from 2019, and their impact:

Facebook pays teens to spy on their data

Josh Constine’s landmark investigation discovered that Facebook was paying teens and adults $20 in gift cards per month to install a VPN that sent Facebook all their sensitive mobile data for market research purposes. The laundry list of problems with Facebook Research included not informing 187,000 users that the data would go to Facebook until they signed up for “Project Atlas”, not receiving proper parental consent for over 4,300 minors, and threatening legal action if a user spoke publicly about the program. The program also abused Apple’s enterprise certificate program, which is designed only for distribution of employee-only apps within companies, to avoid the App Store review process.

The fallout was enormous. Lawmakers wrote angry letters to Facebook. TechCrunch soon discovered a similar market research program from Google called Screenwise Meter that the company promptly shut down. Apple punished both Google and Facebook by shutting down all their employee-only apps for a day, causing office disruptions since Facebookers couldn’t access their shuttle schedule or lunch menu. Facebook tried to claim the program was above board, but finally succumbed to the backlash and shut down Facebook Research and all paid data collection programs for users under 18. Most importantly, the investigation led Facebook to shut down its Onavo app, which offered a VPN but in reality sucked in tons of mobile usage data to figure out which competitors to copy. Onavo helped Facebook realize it should acquire messaging rival WhatsApp for $19 billion, and it’s now at the center of anti-trust investigations into the company. TechCrunch’s reporting weakened Facebook’s exploitative market surveillance, pitted tech’s giants against each other, and raised the bar for transparency and ethics in data collection.

Protecting The WannaCry Kill Switch

Zack Whittaker’s profile of the heroes who helped save the internet from the fast-spreading WannaCry ransomware reveals the precarious nature of cybersecurity. The gripping tale documenting Marcus Hutchins’ benevolent work establishing the WannaCry kill switch may have contributed to a judge’s decision to sentence him to just one year of supervised release instead of 10 years in prison for an unrelated charge of creating malware as a teenager.

The dangers of Elon Musk’s tunnel

TechCrunch contributor Mark Harris’ investigation discovered inadequate emergency exits and other problems with Elon Musk’s plan for his Boring Company to build a Washington, D.C.-to-Baltimore tunnel. Consulting fire safety and tunnel engineering experts, Harris built a strong case for why state and local governments should be suspicious of technology disrupters cutting corners in public infrastructure.

Bing image search is full of child abuse

Josh Constine’s investigation exposed how Bing’s image search results not only showed child sexual abuse imagery, but also suggested search terms to innocent users that would surface this illegal material. A tip led Constine to commission a report by anti-abuse startup AntiToxin (now L1ght), forcing Microsoft to commit to UK regulators that it would make significant changes to stop this from happening. However, a follow-up investigation by the New York Times citing TechCrunch’s report revealed Bing had made little progress.

Expelled despite exculpatory data

Zack Whittaker’s investigation surfaced contradictory evidence in a case of alleged grade tampering by Tufts student Tiffany Filler who was questionably expelled. The article casts significant doubt on the accusations, and that could help the student get a fair shot at future academic or professional endeavors.

Burned by an educational laptop

Natasha Lomas chronicled troubles at educational computer hardware startup pi-top, including a device malfunction that injured a U.S. student. An internal email revealed the student had suffered “a very nasty finger burn” from a pi-top 3 laptop designed to be disassembled. Reliability issues swelled and layoffs ensued. The report highlights how startups operating in the physical world, especially around sensitive populations like students, must make safety a top priority.

Giphy fails to block child abuse imagery

Sarah Perez and Zack Whittaker teamed up with child protection startup L1ght to expose Giphy’s negligence in blocking sexual abuse imagery. The report revealed how criminals used the site to share illegal imagery, which was then accidentally indexed by search engines. TechCrunch’s investigation demonstrated that it’s not just public tech giants who need to be more vigilant about their content.

Airbnb’s weakness on anti-discrimination

Megan Rose Dickey explored a botched case of discrimination policy enforcement by Airbnb, when a blind and deaf traveler’s reservation was cancelled because they had a guide dog. Airbnb tried to just “educate” the host who was accused of discrimination instead of levying any real punishment, until Dickey’s reporting pushed it to suspend them for a month. The investigation reveals the lengths Airbnb goes to in order to protect its money-generating hosts, and how policy problems could mar its IPO.

Expired emails let terrorists tweet propaganda

Zack Whittaker discovered that Islamic State propaganda was being spread through hijacked Twitter accounts. His investigation revealed that if the email address associated with a Twitter account expired, attackers could re-register it to gain access and then receive password resets sent from Twitter. The article revealed the savvy but not necessarily sophisticated ways terrorist groups are exploiting big tech’s security shortcomings, and identified a dangerous loophole for all sites to close.

Porn & gambling apps slip past Apple

Josh Constine found dozens of pornography and real-money gambling apps had broken Apple’s rules but avoided App Store review by abusing its enterprise certificate program — many based in China. The report revealed the weak and easily defrauded requirements to receive an enterprise certificate. Seven months later, Apple revealed a spike in porn and gambling app takedown requests from China. The investigation could push Apple to tighten its enterprise certificate policies, and proved the company has plenty of its own problems to handle despite CEO Tim Cook’s frequent jabs at the policies of other tech giants.

Bonus: HQ Trivia employees fired for trying to remove CEO

This Game Of Thrones-worthy tale was too intriguing to leave out, even if the impact was more of a warning to all startup executives. Josh Constine’s look inside gaming startup HQ Trivia revealed a saga of employee revolt in response to its CEO’s ineptitude and inaction as the company nose-dived. Employees who organized a petition to the board to remove the CEO were fired, leading to further talent departures and stagnation. The investigation served to remind startup executives that they are responsible to their employees, who can exert power through collective action or their exodus.

If you have a tip for Josh Constine, you can reach him via encrypted Signal or text at (585)750-5674, joshc at TechCrunch dot com, or through Twitter DMs




LaunchDarkly CEO Edith Harbaugh explains why her company raised another $54M – TechCrunch

‘This is the very, very beginning of something much bigger’

This week, LaunchDarkly announced that it has raised another $54 million. Led by Bessemer Venture Partners and backed by the company’s existing investors, it brings the company’s total funding up to $130 million.

For the unfamiliar, LaunchDarkly builds a platform that allows companies to easily roll out new features to only certain customers, providing a dashboard for things like “canary launches” (pushing new stuff to a small group of users to make sure nothing breaks) or launching a feature only in select countries or territories. By productizing an increasingly popular development concept (“feature flagging”) and making it easier to toggle new stuff across different platforms and languages, the company is quickly finding customers in companies that would rather not spend time rolling their own solutions.
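For readers unfamiliar with the pattern, here is a minimal, generic sketch of a feature flag with a percentage-based canary rollout and a country restriction, as seen from application code. It is an illustration of the concept only, not LaunchDarkly’s actual SDK or API, and the flag name and numbers are invented.

```python
import hashlib

# Hypothetical flag configuration an app might fetch from a flag service.
FLAGS = {
    "new-checkout-flow": {
        "enabled": True,
        "rollout_percent": 10,              # canary: only ~10% of users see it
        "allowed_countries": {"US", "CA"},  # launch only in select countries
    },
}

def is_enabled(flag_key: str, user_id: str, country: str) -> bool:
    """Decide whether this user should see the flagged feature."""
    flag = FLAGS.get(flag_key)
    if not flag or not flag["enabled"]:
        return False
    if country not in flag["allowed_countries"]:
        return False
    # Hash the user id so the same user consistently lands in or out of the canary.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

if __name__ == "__main__":
    if is_enabled("new-checkout-flow", user_id="user-42", country="US"):
        print("render the new checkout flow")
    else:
        print("render the existing checkout flow")
```

The appeal of a productized version is that this kind of flag configuration lives in a managed dashboard and can be toggled without a deploy, rather than being hard-coded and re-implemented by every team.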

I spoke with CEO and co-founder Edith Harbaugh, who filled me in on where the idea for LaunchDarkly came from, how their product is being embraced by product managers and marketing teams and the company’s plans to expand with offices around the world. Here’s our chat, edited lightly for brevity and clarity.


