Technology and Silicon Valley were among the main topics dominating the news this year, and certainly among the most polarizing on social media and in the public debate.
From fake news and political elections around the world to artificial intelligence and blockchain, everybody was part of the conversation online and offline.
While “fake news” was one of the most popular terms of 2018, this year’s Word of the Year, according to Dictionary.com, was:
Misinformation.
“The rampant spread of misinformation poses new challenges for navigating life in 2018,” reads a statement. “As a dictionary, we believe understanding the concept is vital to identifying misinformation in the wild, and ultimately curbing its impact.”
2018 was the year of tech hearings on Capitol Hill. Facebook’s founder and CEO Mark Zuckerberg, the company’s Chief Operating Officer Sheryl Sandberg, and Twitter’s co-founder and CEO Jack Dorsey were all grilled this year by US senators and representatives. Zuckerberg also testified in Brussels before European Parliament President Antonio Tajani and the EU Parliament’s Conference of Presidents.
The more interesting comments, however, came from hearings with lower-level officials of the social media companies. During a July hearing of the House Judiciary Committee on Capitol Hill, a remark by Rep. Pramila Jayapal caught my attention.
“The challenge here is that it is difficult to determine exactly what may qualify as false news. But the bigger problem to me is that somehow we get to a standard that truth is relative,” she said, referring to the current debate on fake news and misinformation on social media.
Truth is not relative. An apple is an apple. It can’t be a tomato tomorrow and a pear yesterday. It is an apple.
In response, Monika Bickert, Head of Global Policy Management at Facebook, one of the three witnesses at the hearing alongside Google and Twitter, explained that there are a couple of different things her platform does to address the issue.
Facebook, in fact, acknowledges that “the majority of false news that we see on social media tends to come from spammers and financially motivated actors.”
“That violates our policies,” said Bickert. “We have technical means of trying to detect those accounts and remove them. We’ve made a lot of progress in the past few years.”
She added: “Then there’s content that people might disagree about, or it may be widely alleged to be false. We definitely heard feedback that people don’t want to have private companies in the business of determining what is true and what is false. But what we know we can do is counter virality, if we think that there are signals — like third-party fact checkers — that the content is false, by demoting a post and by providing additional information to people so that they can see whether or not the article is consistent with what other mainstream sources around the Internet are also saying.”
In answering a previous question on the nature of fake news by Rep. Ted Poe, Bickert stressed that “we don’t have a policy of removing fake news.”
She added: “What we do is, if people flag content as being false, or if our technology, comments, and other signals detect that content might be false, then we send it to these fact-checking organizations.”
The fact-checking organizations with which Facebook works were mentioned earlier in the hearing. They include Associated Press (AP), PolitiFact, The Weekly Standard, FactCheck.org, and Snopes.
“If they rate the content as false — and none of them rate as true — then we would reduce the distribution of the content and add the related articles,” Bickert said.
As the congressman pressed, she stressed again: “Sharing information that is false does not violate our policies.”
But the question about the nature of fake news came up a few times in the hearing, and Google’s Juniper Downs, Global Head of Public Policy and Government Relations at YouTube, identified a spectrum in response to Rep. Mike Johnson.
“Fake news is a term used to define a spectrum of content,” she answered. “On one end of the spectrum, we have malicious, deceptive content that is often spread by troll farms. That content would violate our policies and we would act quickly to remove the content and/or the accounts that are spreading it. In the middle, you have misinformation that may be low quality. This is where our algorithm kicks in to promote more authoritative content and demote lower quality content. And then, of course, you’ve also heard the term referred to mainstream media, in which case we do nothing. We don’t embrace the term in that context.”
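Taken together, Bickert’s and Downs’s descriptions sketch a tiered triage: content that violates policy is removed, content that independent fact-checkers rate as false is demoted and paired with related articles, and everything else is left alone. Purely as an illustration of that logic, here is a minimal Python sketch; the signal names, the flag threshold, and the fact-checker interface are hypothetical stand-ins, not either company’s actual system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()     # malicious, policy-violating content
    DEMOTE = auto()     # reduce distribution, attach related articles
    NO_ACTION = auto()  # disputed content stays up

class Verdict(Enum):
    RATED_FALSE = auto()
    RATED_TRUE = auto()
    NOT_RATED = auto()

@dataclass
class Post:
    text: str
    user_false_flags: int    # "this is false" reports from users
    from_spam_network: bool  # detected financially motivated actor

def fact_checker_verdicts(post: Post) -> list[Verdict]:
    """Stand-in for sending a post to third-party fact-checkers.

    A real system would query organizations like AP or PolitiFact
    asynchronously; this sketch simply returns no verdicts.
    """
    return []

def triage(post: Post) -> Action:
    # Spam and coordinated fakery violate policy outright and are removed.
    if post.from_spam_network:
        return Action.REMOVE
    # Signals such as user flags escalate a post to fact-checkers.
    if post.user_false_flags >= 10:  # hypothetical threshold
        verdicts = fact_checker_verdicts(post)
        rated_false = any(v is Verdict.RATED_FALSE for v in verdicts)
        rated_true = any(v is Verdict.RATED_TRUE for v in verdicts)
        # Bickert's rule: demote only if rated false and none rate it true.
        if rated_false and not rated_true:
            return Action.DEMOTE
    # Merely sharing false information does not violate policy.
    return Action.NO_ACTION
```

The notable design choice in both testimonies is that the middle tier counters virality rather than adjudicating truth: the post stays up, but its distribution drops and readers see context from other sources.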
Strongly connected to the issue of fake news is that of the relationship between tech and politics, as well as the role of social media platforms in political elections around the world.
The midterm elections in the U.S. are still very much part of the news cycle, while the European elections are quickly approaching, with more than 350 million EU citizens going to the polls in May.
Social media companies have taken steps to improve their policies governing political advertising and misinformation, with a focus on state and non-state actors trying to influence elections around the world.
In September, during an event hosted by the Bipartisan Policy Center, Facebook’s global politics and government outreach director Katie Harbath said that while Facebook has taken steps, it’s viewing the 2018 midterms as “a good milestone” before the 2020 U.S. presidential election.
“We’re not going to really be able to know the effect of the changes that we and other platforms are making, honestly, until after 2020,” Harbath was quoted as saying by FCW. “We are barely at the start of this.… This is always going to be a continuous thing for us to try to figure out.”
“As part of our efforts to prevent interference on Facebook during elections, we are in regular contact with law enforcement, outside experts and other companies around the world,” wrote Nathaniel Gleicher, Facebook’s Head of Cybersecurity Policy, in a statement. “These partnerships, and our own investigations, have helped us find and remove bad actors from Facebook on many occasions in the last year.”
At Twitter, Senior Public Policy Manager Bridget Coyne explained in a statement that “Over the last several months, we’ve taken significant steps to safeguard the integrity of conversations surrounding the US elections by reducing the spread of disinformation, strengthening outreach to government stakeholders, and streamlining our enforcement processes.”
She added: “We are committed to serving the public conversation about elections on our platform.”
Looking ahead, Google’s Lie Junius, Director of EU Public Policy and Government Relations, explained that in Europe, in preparation for the elections for the renewal of the EU Parliament, “to support this democratic process, we’re rolling out products and programs to help people get the information they need to cast their votes.”
Google’s efforts include getting voters the information they need to navigate the electoral process; helping voters better understand the political advertising they see with more transparent ads; and protecting election information online.
Tweet via Mary Ritti, Snap Inc VP of Communications.
A look at Snap, the company behind the popular social media service Snapchat, showed great results from the platform’s voter registration and participation efforts in the US.
For the midterms, Snapchat added polling locations to its Snap Map, a feature within its app that shows events happening all around the world in real-time.
This was part of a greater push to get Snap users civically engaged ahead of this year’s midterm elections. More than 400,000 users, mainly between 18 and 24 years old, registered to vote using the app thanks to a partnership with TurboVote.
The backlash from the Cambridge Analytica scandal has had a deep impact not only on users, now more aware than ever of how their data is used on social media, but also on the tech industry and government legislators around the world.
“Tech is definitely about to get regulated. And probably for the best.”
This was a tweet from Aaron Levie, cofounder, CEO, and self-described Lead Magician at Box. It was his first comment after the news broke in March that Cambridge Analytica had illegally harvested 50 million Facebook profiles of US voters.
A few days later, Levie commented again on Twitter:
“The responsibility of tech companies grows exponentially,” he pointed out, adding that claiming they are “merely platforms and pipes” is no longer a sustainable argument.
Even before the Facebook and Cambridge Analytica scandal broke, Sir Tim Berners-Lee, the inventor of the World Wide Web and founder of The Web Foundation, called for large technology firms to be regulated to prevent the web from being “weaponized at scale.”
Berners-Lee’s statement was part of an open letter to mark the 29th anniversary of the world wide web. Prophetic? Or just common sense?
“In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data,” he wrote, pointing out that the current response of lawmakers has been to look “to the platforms themselves for answers” — which he argues is neither fair nor likely to be effective.
The Internet “has been spectacular in so many ways,” Joseph Lubin, founder of startup ConsenSys and co-founder of ethereum, said in his keynote address during the Ethereal Summit in Brooklyn in May. “It has transformed global society, but it’s broken.”
In his keynote presentation, Lubin pointed out how the technology behind the Internet and the World Wide Web “is getting stretched to its limits” and security is one of the main issues.
“Its [the Internet’s] foundations were formed decades ago, in naive times,” Lubin highlighted in one of his slides. “Security evolved as a patchwork later.”
Blockchain is going to be a revolution in IT security because every transaction against your infrastructure is strongly cryptographically authenticated and granularly authorized.
Joseph Lubin on stage at ConsenSys’ Ethereal Summit 2018.
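Lubin’s security claim rests on public-key signatures: every blockchain transaction is signed by the sender, and any node can verify the signature without trusting an intermediary. As a minimal sketch of that authentication step, here is an example using the third-party ecdsa Python package and secp256k1, the curve Ethereum itself uses; the transaction encoding is simplified for illustration:

```python
# pip install ecdsa
from ecdsa import SECP256k1, SigningKey, BadSignatureError

# Each account holds a private signing key; the public key is shared.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

# A simplified transaction: real chains sign a canonical serialization.
transaction = b"pay 1.5 ETH to 0xabc... nonce=7"

# The sender authenticates the transaction by signing it.
signature = private_key.sign(transaction)

# Any node can verify the signature against the sender's public key.
try:
    public_key.verify(signature, transaction)
    print("transaction authenticated")
except BadSignatureError:
    print("rejected: signature invalid")

# Tampering with the transaction invalidates the signature.
try:
    public_key.verify(signature, b"pay 100 ETH to 0xevil... nonce=7")
except BadSignatureError:
    print("rejected: transaction was altered")
```

Granular authorization is then layered on top of this authentication: a smart contract can check the verified signer against on-chain permission rules before executing anything.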
“You have a force for universal disintermediation,” enabling content creators, resource providers, and service providers to reach consumers directly, with few or no intermediaries extracting value without adding any commensurate value.
ConsenSys has been exploring these approaches already with platforms like Ujo for music, Civil: Self-Sustaining Journalism for news, and Cellarius for collaborative, fan-crowdsourced, and fan-curated stories across many formats. It is also working in the area of services and resources, linking them directly to consumers, with projects like Golem Project, SONM, Kauri Team, Swarm, Storj Labs, Pangea, and many others.
This constitutes Web 3.0 for ConsenSys and its mesh of projects: trusted transactions, automated agreements, and smart software objects on Ethereum, a single world computer and a single execution space, alongside other protocols for decentralized storage, decentralized bandwidth, and heavy compute.
All of this is going to enable us, as people and corporations, to interoperate much more fluidly.
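What the “single world computer” framing means in practice is that any client can connect to any node and read the same globally agreed-upon state. Here is a small sketch using recent versions of the web3.py library; the endpoint URL and the address below are placeholders, not real infrastructure:

```python
# pip install web3
from web3 import Web3

# Connect to any Ethereum node; the endpoint is a placeholder.
w3 = Web3(Web3.HTTPProvider("https://mainnet.example-node.io"))

if w3.is_connected():
    # Every node exposes the same shared, consensus-backed state.
    print(f"latest block: {w3.eth.block_number}")

    # Account balances are part of that state (address is illustrative).
    address = Web3.to_checksum_address(
        "0x0000000000000000000000000000000000000000")
    balance_wei = w3.eth.get_balance(address)
    print(f"balance: {Web3.from_wei(balance_wei, 'ether')} ETH")
```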
In a way, Web 3.0 brings the Internet back to its beginning as a decentralized architecture.
“But efficiencies and drive for wealth led to siloed, walled gardens,” Lubin highlighted in a slide. “This was due to a lack of mechanism for shared ownership of open platforms.”
Web 3.0 is what we’re just on the cusp of. And Web 4.0 is going to be very interesting.
Lubin mentioned how Web 4.0 is “what we’re just starting to think about”: a system where artificially intelligent agents are empowered with value through blockchain and tokens. “The Internet of the machines economy is going to be very interesting and definitely coming to a blockchain near you,” he said.
Essentially, in 1–2–3 years I think it’s going to feel like blockchain is everywhere. It’s still not true right now.
Artificial intelligence and machine learning represent one of the most exciting trends in technology: virtual assistants, autonomous cars, self-learning algorithms. These are challenges many tech companies and startups are looking at to push innovation forward. But the number of AI critics is multiplying, as these technologies also have a dark side.
2018 has been a key year for artificial intelligence as concerns about the repercussions of AI on society and human activity are mounting, even within Big Tech. Trust is also a big issue as some believe that the recent rush towards AI may suggest that we are turning over the keys of reason to machines.
However, companies like Google, Microsoft, and Amazon are now exploring ways to tap into AI for social good and humanitarian projects and aid.
Google was the latest to enter this space of what some refer to as AI for good. The company announced in November its AI Impact Challenge to grant about $25 million globally in 2019 to humanitarian and environmental projects seeking to use artificial intelligence to speed up and grow their efforts.
Reuters pointed out that “Focusing on humanitarian projects could aid Google in recruiting and soothe critics by demonstrating that its interests in machine learning extend beyond its core business and other lucrative areas, such as military work.”
Earlier this year, following a harsh and public employee backlash, Google announced that it would not renew a deal to analyze US military drone footage in an AI-based program.
Google AI Chief Operating Officer Irina Kofman told Reuters the new AI for good program was not a reaction to what happened earlier this year, but noted that thousands of employees are eager to work on “social good” projects despite the fact that those programs often do not directly generate revenue.
Microsoft, on the other hand, quietly announced that it would sell the US military and intelligence agencies whatever advanced technologies they needed “to build a strong defense,” including its machine learning and AI tools.
“We want the people of this country and especially the people who serve this country to know that we at Microsoft have their backs,” wrote Brad Smith, Microsoft President, in a blog post. “They will have access to the best technology that we create. At the same time, we appreciate that technology is creating new ethical and policy issues that the country needs to address in a thoughtful and wise manner. That’s why it’s important that we engage as a company in the public dialogue on these issues.”
To counter the mounting criticism, Microsoft has recently launched a series of new AI programs, totaling $115 million, including AI for Earth, its new project to put AI to work for the future of our planet, and AI for Humanitarian Action, a new $40 million, five-year program to harness the power of AI to focus on four priorities — helping the world recover from disasters, addressing the needs of children, protecting refugees and displaced people, and promoting respect for human rights.
Where to start?
GDPR was one of the most famous — and infamous — acronyms of 2018. Everybody in tech, for better or worse, heard about it, worked on implementing it, or researched its legislative and regulatory implications around the world.
Tweet via the European Commission.
GDPR stands for General Data Protection Regulation, a new EU regulation that entered into force at the end of May. GDPR had — and still has — a significant effect on how Internet businesses operate in Europe, no matter whether they are big or small, and whether or not they are located or operate within the borders of the EU.
WIRED even wrote: “Once mocked, Europe’s new data protection has become a source of transatlantic envy.” “When GDPR was first passed, US commentators dismissed it as a piece of jealous protectionism,” Rowland Manthorpe explains, citing an editorial in The New York Times now calling for similar rules.
“The new European rules are not perfect — they include the so-called right to be forgotten, which allows people to ask companies to delete personal information that they no longer wish to share,” the editorial read. “But the Europeans have made progress toward addressing some of the problems that have recently been highlighted in the United States.”
GDPR rules were even mentioned during many tech hearings on Capitol Hill in Washington DC, including the hearing with Mark Zuckerberg of Facebook.
Via Stefan Becket on Twitter
A photo of Zuckerberg’s notes for the hearing even went viral. It shows the notes provided to Zuckerberg by his legal and public policy team, including General Counsel Colin Stretch; Joel Kaplan, Vice President for US Public Policy; Erin Egan, Chief Privacy Officer; Myriah Jordan, Public Policy Director for Congressional Affairs; Pearl Del Rosario, Associate General Counsel for Compliance; and Brian Rice, Director of Public Policy.
The notes include talking points on a plethora of topics and issues, including Russian interference and election integrity, diversity, competition, and privacy… and even Facebook’s reaction to comments by Apple’s Tim Cook on the company’s business model and leadership.
In the notes, when it comes to privacy, the EU’s new GDPR features quite prominently.
The notes advise Zuckerberg: “don’t say we already do what GDPR requires.”
Lime electric scooters in Washington DC.
As I wrote in November, I don’t get this love-hate relationship cities and commuters have with dock-less electric scooter and bike sharing startups, like LimeBike, Bird, and now even Lyft and Uber.
And I certainly don’t agree with Elon Musk, who, in a Recode podcast with Kara Swisher, said that scooters “lack dignity.”
Dignity?
I’m a biker, and I see the benefits these new modes of last-mile transportation can bring to our cities. They help reduce congestion and traffic; they green our environment; they cut our commuting time; and they’re fun. In addition, biking is a very healthy way of moving around, while both bikes and scooters, docked or dock-less, are very affordable.
Of course, there are both pros and cons. Educating scooter users — to wear helmets, limit usage by minors, and obey traffic laws — and working with cities and local administrations is key to making this new model work for everybody. But it is also important to rethink our cities beyond the industrial-revolution grids and schemes we still live by, with cars still dominating our transportation systems and pedestrians having to adjust to that dominance. Oftentimes, bikers are not even part of the mobility equation, and cities are slow to adjust to new, innovative commuting schemes.
Cities are not good at disrupting systems and moving away from the status quo. Cities are not good at operating as startups. But they should.
12 ways to improve mobility in our cities: “As a biker, I wish our cities would rethink the transportation grid with citizens — not cars or companies — in mind.” (medium.com)
In April, on the eve of Airbnb’s 10th anniversary, Airbnb Citizen launched the Office of Healthy Tourism, “an initiative to drive local, authentic and sustainable tourism in countries and cities across the globe.”
Along with the launch of the Office, Airbnb released data showing the benefits of healthy tourism for hosts, guests, and cities around the world, and announced the creation of a new Tourism Advisory Board made up of travel industry leaders from around the world.
“With travel and tourism growing faster than most of the rest of the economy, it is critical that as many people as possible are benefiting — and right now not all tourism is created equal,” Chris Lehane, Airbnb’s Head of Global Policy, said in a statement. “To democratize the benefits of travel, Airbnb offers a healthy alternative to the mass travel that has plagued cities for decades. Airbnb supports tourism that is local, authentic, diverse, inclusive and sustainable. Through the meaningful income earned by the mosaic that is our global community of hosts; our ability to promote tourism to places that need it the most; and the inherent sustainable benefits of hosting, Airbnb is providing the type of travel that is best for destinations, residents, and travelers alike.”
A list of the most talked-about topics in tech published here on Medium has to mention Medium itself.
In October, the publishing platform started by Twitter co-founder Ev Williams published its first book, a satirical novel about tech written by former Google head of communications Jessica Powell.
The book, titled The Big Disruption: A Totally Fictional But Essentially True Silicon Valley Story, is a novel and NOT a Google tell-all. It “scrutinizes Silicon Valley from the perspective of someone who has spent most of her career in the industry,” wrote Eric Johnson on Recode.
The Big Disruption: A Totally Fictional But Essentially True Silicon Valley Story (disruption.medium.com)
“On a very personal level, [I have] a feeling like people who I worked with for many years are gonna somehow think that I’ve betrayed them,” Powell told Recode. “But at no point was I ever thinking I was writing a book about them or about their … It’s much broader than that.”
As The Guardian noted, “The Big Disruption is still a 90,000-word send-up that should make any somewhat-reflective tech worker at least smirk in recognition.”