nathan.ai newsletter: Q4 2017

Written by Nathan Benaich | Published 2018/03/03
Tech Story Tags: artificial-intelligence | deep-learning | venture-capital | technology | machine-learning

A market intelligence newsletter covering AI in the technology industry, research lab and venture capital market. Subscribe here.

Reporting from 17th October 2017 through 31st December 2017

Hello from London ⛄! I’m Nathan Benaich. It’s been a while since I’ve written you…so I’m excited to bring you issue #23 of my AI newsletter! Here, I’ll synthesise a narrative analysing and linking important news, data, research and startup activity from the AI world. I’m covering Q4 2017 and will follow up with Q1 2018 in the next few days. Grab your hot beverage of choice ☕ and enjoy the read! A few announcements to start:

1. Are you an engineer, researcher or operator working in AI? Are you moving into the intelligent systems space and thinking through your next moves? Hit reply and drop me a line — I love exploratory, blue-sky brainstorming sessions :) I’m investing as a Venture Partner with Point Nine Capital.

2. I’m running the 4th edition of The Research and Applied AI Summit on June 29th in London. We’ve built a vibrant community of AI practitioners capable of attracting 10 world-class researchers and entrepreneurs who will give talks on the science and applications of AI technology. I’m thrilled to have exceptional speakers this year, many of whom are flying in from the US to be with us. We’ve so far announced Shakir Mohamed (DeepMind), Chris Ré (Stanford) and Blake Richards (UToronto), and will be sharing more news over the next few weeks. If you’re a student or researcher in academia, working in engineering/research/product at a startup or large technology company, or thinking about making the jump between academia and technology companies, be sure to register your interest to attend here.

3. Our upcoming London.AI events are slated for April 19th and September 27th. Event #11 was a big success thanks to OpenMined, Element.AI, Dataiku and GTN.

Referred by a friend? Sign up here. Help share by giving it a tweet :)

🆕 Technology news, trends and opinions

🚗 Department of Driverless Cars

Eager to keep track of where your city stands on the spectrum from preparing for to piloting autonomous vehicles? Have a look at this tracker.

a. Large incumbents

Delphi Automotive, the US-listed British Tier 1 automotive supplier, decided to split into two entities: Delphi Technologies, a spin-off comprising its automotive powertrain, advanced propulsion and aftermarket solutions businesses, and Aptiv, which focuses on new mobility solutions, smart vehicle architecture and connecting cars to connected cities, and includes nuTonomy. Aptiv demonstrated its technology with Lyft at CES, resulting in a positively uneventful ride!

Intel’s Mobileye published a paper proposing the minimal requirements that every self-driving car must satisfy using four common-sense rules, how these requirements can be verified, and how to design a system that adheres to these safety standards without incurring exploding development costs. Sounds like a reasonable approach for highway driving, but one that is more challenging to generalise to urban driving where behaviours are less predictable.

NVIDIA released a smaller form factor chipset based on the Drive PX2 to make the system more practical.

Baidu announced several updates to its Apollo self-driving program in its Q4 2017 earnings report, which showed a 29% and 25% YoY increase in revenue and R&D spend, respectively. Notably, the Apollo program now has 90 partners and backing from China’s Ministry of Science and Technology as the national open AV platform of choice. In fact, 100 impeccably choreographed Baidu Apollo-equipped cars were front and center at China’s Spring Festival Gala, which was televised to 800 million viewers! The company plans Level 4 buses by July 2018 and cars by 2021. The second release of their open source project now supports simple urban road conditions. It also introduced four new functions, including cloud services, software platforms, reference hardware platforms and reference vehicle platforms. This strategy is consistent with Baidu playing catch up by aligning with the government, spreading its software and tools far and wide while engaging in many product collaborations.

Tencent is said to be joining the autonomous vehicle market, having developed its own system according to Bloomberg. The company is a shareholder in Tesla, Didi Chuxing and NavInfo (a mapmaker), as well as an investor in other mobility companies like Uber, Lilium Aviation, GO-JEK and WM Motor (a new car company). It’s therefore not out of the question for Tencent to learn from these investments and develop its own offering, much as Google and others have.

Waymo are essentially ready to go with their autonomous Level 4 fleet, especially within defined routes. They have a neat promo video and are opening the service to members of the public in the next few months. Interestingly, Waymo published a 43-page Safety Report in which they describe their Safety by Design principles, which build on best practices from aerospace, automotive and defense. It covers five safety areas (behavioural, functional, crash, operational and non-collision safety), how each is tested, and how backup systems are provisioned. The report also presents Waymo’s approach to vehicle data security and simulation. It’s an impressive piece of work that is really worth a read.

Renault released a demo of their autonomous vehicle dodging a few cones at 27 mph.

Volvo agreed to supply Uber with a fleet of 24,000 vehicles equipped with self-driving systems. That’s the same number of cars as there are black cabs in London. The order is estimated at around $1B in value, accounting for roughly 4.5% of Volvo’s 2016 sales. Uber hit 1 million self-driving miles on the road in September 2017 (after 2.5 years) and logged another million 100 days later. Today they’re tracking at 84,000 miles a week on the road. Meanwhile, Volvo pushed back its ambitious target of having 100 autonomous cars on the roads in Sweden from 2017 to 2021!

Microsoft have open sourced their own plugin atop the Unreal Engine that serves as a high-fidelity system for testing the safety of AI systems, including autonomous vehicles.

In a collaboration between Intel, the Toyota Research Institute and the Computer Vision Center in Barcelona, researchers released an open source simulator for autonomous driving research called CARLA. It comes with free digital assets: urban layouts, buildings and vehicles.

A piece in the Atlantic argues that self-driving car rides could grow up to be free, where the business model for operators is real-world advertising for businesses wishing to showcase real estate, new shops or service on-demand pickup of products/services along a rider’s route. Interesting concept!

b. Startups

Argo is diving deeper into the self-driving stack by acquiring Princeton Lightwave, a team of LiDAR hardware and software engineers. The aim is to develop and own new sensors in-house, much in the way Waymo has done.

Navya, the French autonomous electric vehicle company, ran a two week experiment in Las Vegas with their autonomous shuttle. The system also uses a vehicle-to-infrastructure technology that reads sensor data embedded in Las Vegas’ traffic signals to better manage traffic flow. This is not dissimilar to the approach used by Lyft/Aptiv at CES and suggests that city-scale IoT infrastructure updates are a way to reduce the complexity of autonomous mobility.

AEye, a US startup, launched a new hybrid sensor for autonomous agents that combines a low-light camera, solid-state LiDAR and onboard image processing that can be reprogrammed on the fly. The device scans certain areas of the scene at low resolution and others at higher resolution depending on the priority of the car’s control software. In this way, it sounds not dissimilar to how humans process higher level interpretations of visual scenes. However, the visual field is 70 degrees, thus requiring several devices per vehicle.

💪 The giants

Spotify filed its F-1 to complete its direct listing on the NYSE, allowing registered shareholders to sell up to $1B of shares at a $19.7B valuation. AI is a core part of the company’s strategy to both grow and differentiate its service from those run by Google, Apple and Amazon. Specifically, Spotify uses recommender systems to personalise the music discovery experience for each user, including the Discover Weekly, Daily Mix and Release Radar features. According to the filing, the discovery engine “now programs approximately 31% of all listening on Spotify across these and other playlists, compared to less than 20% two years ago.” What’s more, the system uncovers “hidden gems” for listeners. This helps unknown artists monetise on the platform, gives them a chance to build a following and recruits listener feedback to improve the system’s recommendations. In fact, over 150 billion user interactions are logged on a daily basis, which amounts to a dataset of 200 petabytes in size (vs. 60 petabytes at Netflix).

Dropbox also filed their S-1, citing that machine learning ”further improves the user experience by enabling more intelligent search and better organization and utility of information. This ongoing innovation broadens the value of our platform and deepens user engagement.” Their revenue growth is strong, and migrating to owned infrastructure has slightly reduced operating expenses, growing margins.

Snap caused a racket by updating the Snapchat client to introduce an algorithmically personalised Stories feed, which “leverages the tremendous benefits of machine learning without compromising the editorial integrity of the Stories platform”.

Amazon introduced SageMaker at re:invent 2017, a fully managed end-to-end machine learning service that enables users to build, train and host ML models at scale. As such, you can rent your ML infrastructure from AWS now :)
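
To make that concrete, here is a minimal sketch of the build → train → host flow using the SageMaker Python SDK. The container image, S3 paths and IAM role are placeholders of mine, and parameter names differ slightly between SDK versions.

```python
# A minimal, illustrative SageMaker workflow: point a generic Estimator at a
# training container and data in S3, launch a managed training job, then deploy
# the resulting model behind a hosted endpoint. All resource names are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_name="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image",  # hypothetical image
    role="arn:aws:iam::123456789012:role/MySageMakerRole",                        # hypothetical IAM role
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",
    output_path="s3://my-bucket/model-artifacts",
    sagemaker_session=session,
)

# Managed training: SageMaker provisions the instances, runs the container and saves artifacts to S3.
estimator.fit({"train": "s3://my-bucket/training-data"})

# Managed hosting: stand up an HTTPS endpoint serving the trained model.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```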

Apple have started to expose software from their Turi acquisition, starting with Turi Create. The library simplifies the development of custom machine learning models for recommendations, object detection, image classification, similarity and activity detection. The system outputs models to Apple’s Core ML for cross-platform use across the Apple ecosystem. This is a move to hold app developers in the walled orchard.
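
As a rough illustration of how simple Turi Create aims to make this, here is a hedged sketch of training an image classifier and exporting it to Core ML; the folder layout and label-from-folder-name convention are assumptions on my part.

```python
# Illustrative Turi Create flow: load labelled images, train a classifier with
# sensible defaults, then export to Core ML for use across Apple platforms.
# The './photos' folder and labelling convention are placeholders.
import turicreate as tc

data = tc.image_analysis.load_images("./photos", with_path=True)
data["label"] = data["path"].apply(lambda p: p.split("/")[-2])  # parent folder name as label

train, test = data.random_split(0.8)

model = tc.image_classifier.create(train, target="label")
print(model.evaluate(test)["accuracy"])

model.export_coreml("MyClassifier.mlmodel")  # drop into an Xcode project via Core ML
```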

🍪 Hardware

As hardware substrates to run machine learning models grow in number, compute platforms diversify and open source frameworks remain in constant development, there is a need for a unified approach that allows frameworks to run optimally cross-platform without manual optimisation. To this end, several projects have been released: a) Intel published nGraph (following from Nervana’s Neon), b) the University of Washington and the AWS AI team published the NNVM compiler to compile front-end framework workloads directly to hardware backends, and c) a consortium of tech companies led by Facebook, AWS and Microsoft created ONNX. It’s yet unclear how popular these tools are in production environments, but it’s clear that programming flexibility is key. More on framework competition in 2017 here.
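
To give a feel for the interchange idea, here is a small sketch of exporting a PyTorch model to ONNX so that a different runtime or hardware backend can consume the same graph; the toy model is just a placeholder.

```python
# Export a (toy) PyTorch model to the framework-neutral ONNX format. Other
# frameworks and hardware-specific runtimes can then load "model.onnx" without
# needing the original training code.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

dummy_input = torch.randn(1, 784)  # example input used to trace the graph
torch.onnx.export(model, dummy_input, "model.onnx")
```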

Amazon is reportedly developing a custom chipset for its Alexa devices to minimise cloud-based AI workloads, ranging from wake words (e.g. how Siri uses deep learning) to task-based dialogue. Alexa’s popularity with consumers is growing, driving more traffic through Amazon (vs. Google), which in turn drives further habit formation and query volume. Owning the chipset to control the product experience may very well be worth the $100M or so investment (similar to what private chipset companies are raising currently).

Intel also reported the upcoming release of their new Nervana chip, which they claim performs better because it uses less precise computation (16-bit floating point instead of 32-bit) and expands memory bandwidth. However, I haven’t seen power performance comparisons with NVIDIA chips. The company is also working with AMD on a graphics chip for laptops.

Graphcore published preliminary benchmarks exhibiting significant simulated training speedups over GPUs.

Always wanted a Boston Dynamics pet doggo (that now has extra tricks)? Well, this Chinese startup will sell a quasi-replica for $20k.

Broadcom are making a play to own AI assets by issuing a $142B hostile takeover bid for Qualcomm. What’s interesting here is that Qualcomm recently secured a $44B acquisition of NXP, the Dutch chipmaker that supplies chips for German passports. As such, there might be data protection concerns around a Singapore-based company owning a European company that handles EU citizen data.

Tesla confirmed that Jim Keller, chip architect (formerly of AMD and Apple fame), is developing in-house silicon optimised for running their self-driving software suite. Few details are out there.

🎮 Games

Simulation remains an area of active research and development for training AI systems. Self-play, which involves having AI agents compete against themselves to solve a task in some kind of virtual world, is a popular strategy for improving performance on that task. AlphaGo is a great example here. The more exciting aspect of self-play in my view is that of agents discovering hitherto unknown strategies to solve a task. OpenAI show this with their work on competitive self-play in sumo wrestling environments. The challenge is of course overfitting and designing reward functions against which agents optimise their strategies. Make sure it’s the right one! On the topic of OpenAI, Elon Musk has departed the board due to the “potential” conflicts of interest arising from Tesla’s focus on AI. Seems a bit late in the game given that Tesla has already hired serious talent from OpenAI….

David Silver and Julian Schrittwieser of DeepMind’s AlphaGo team led a Reddit AMA. The discussion included why the tree search approach allows more stable training using reinforcement learning and that algorithmic improvements with AlphaGo Zero led to significant improvements to the agent’s learning efficiency. In fact, AlphaGo Zero represented Go knowledge using deep CNNs (i.e. the environment and the game’s rules), trained solely by reinforcement learning from games of self-play with no human examples (paper here). Unfortunately, the team do not plan on open sourcing the entire project because it is “a prohibitively intricate codebase”. Next, the team applied a similar but fully general purpose reinforcement learning algorithm, AlphaZero, to the games of chess, shogi and Go (paper here). AlphaZero replaces the handcrafted knowledge and domain specific augmentations used in traditional game-playing programs with deep neural networks and a general, tabula rasa reinforcement learning algorithm. What’s interesting too is that AlphaZero estimates and optimises the expected outcome, taking account of outcomes other than win/lose/draw. This means that it doesn’t pick the strategy that wins at all costs, but can design a strategy to win by a certain margin.

🏥 Healthcare

Late last year, the NIH Clinical Center released a dataset of >100,000 anonymized chest x-ray images representing 30,000 patients with eight common thoracic diseases (such as pneumonia, nodules, infiltration…). The purpose was to spur innovation in software capable of classifying these diseases from chest x-rays alone, a procedure conducted 2 billion times a year. Rather quickly, Andrew Ng’s group at Stanford published a study claiming to detect pneumonia from chest x-rays at a level exceeding practicing radiologists and prior research. What’s more, the authors produce heatmaps to visualize the areas of the image most indicative of the disease using class activation mappings. While this paper drew lots of attention, radiologist and ML researcher Luke Oakden-Rayner identified serious issues with the dataset itself, and thus with the conclusions drawn by the study. In fact, evaluation of the images in the dataset shows that the disease labels don’t always reflect the condition represented by the x-ray (most likely because the labels are mined automatically from the diagnosis). What’s more, some disease labels cannot be medically differentiated on an x-ray alone, thereby negating the clinical relevance of the software’s ability to do so. While the model’s performance is high on the test set, the predictions are medically useless. Given that 100% unambiguous ground truth labels for a medical condition are almost impossible to obtain, we need to impose extra rigour on machine learning-based systems for clinical diagnostics.
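
For readers curious about the heatmaps, here is a toy sketch of the class activation mapping idea (not the Stanford group’s code): weight the final convolutional feature maps by the classifier weights of the target class and sum them.

```python
# Toy class activation mapping (CAM): the heatmap is a class-weighted sum of the
# last convolutional layer's feature maps. Shapes below are arbitrary examples.
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """
    feature_maps: (C, H, W) activations from the final conv layer
    fc_weights:   (num_classes, C) weights of the linear layer applied after
                  global average pooling
    Returns an (H, W) heatmap for the requested class, normalised to [0, 1].
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)              # keep positive evidence only
    return cam / (cam.max() + 1e-8)

# Example with random activations: 512 channels of 7x7 maps, 14 disease classes.
features = np.random.rand(512, 7, 7)
weights = np.random.randn(14, 512)
heatmap = class_activation_map(features, weights, class_idx=3)
print(heatmap.shape)  # (7, 7); upsample to the x-ray resolution to overlay it
```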

Cardiogram presented early clinical results showing that their DeepHeart system could recognise hypertension and sleep apnea from a wearable heart rate sensor with 82% and 90% accuracy, respectively. Importantly, 1.1 billion people suffer from hypertension and 20% remain undiagnosed. Even worse, 80% of people with diagnosable sleep apnea don’t realise they have it! Cardiogram used their Apple Watch app to collect 30 billion sensor measurements from 6,115 participants over 90 days. Impressively, they show 54% engagement retention at 90 days too, demonstrating that a beautiful product can deliver substantial value extracted using AI techniques. I really like this business.

Google published research on a tool called DeepVariant, which automatically identifies insertion/deletion mutations and single base pair mutations from high-throughput DNA sequencing data. This is one of the core tasks in the bioinformatics pipeline for processing sequencing data. The model took first place in the PrecisionFDA Truth Challenge, which measures the accuracy of variant calls from sequencing data.

🎨 Creativity and designing AI systems

We already know that generative models can create realistic-looking art. New work now shows (in a small sample) that people believe machine-generated art is more creative than work on show at Art Basel, a major art fair. I’m excited to see how creators use machine-assisted generative design to expand their creative output.

Switching to music, the world’s first concert co-written by human composers and machines is soon to be held in Korea (K-Pop of course!). The machine in this case is Jukedeck’s AI software, which has written the music of its own accord.

The playbook on designing intelligent systems is still being written. Adding to the bookshelf is the Intelligence Augmentation Design Toolkit. It provides tools to structure your thinking when designing a service that includes ML elements, namely learning loops.

🎓 Revisiting (old) ideas

Deep learning has been all the rage for solving complex optimisation problems in the last few years, in large part thanks to the backpropagation algorithm that tunes a model’s weights. This process allows for efficient search over the parameter space to find a solution that’s good enough for the network to solve challenging tasks like speech recognition. However, using backpropagation to train agents using reinforcement learning suffers from the problem of receiving feedback multiple timesteps in the future for actions taken now. Another approach that the community is revisiting is Evolution Strategies (ES). In the words of a piece by David Ha, “evolution strategies can be defined as an algorithm that provides the user a set of candidate solutions to evaluate a problem. The evaluation is based on an objective function that takes a given solution and returns a single fitness value. Based on the fitness results of the current solutions, the algorithm will then produce the next generation of candidate solutions that is more likely to produce even better results than the current generation. The iterative process will stop once the best known solution is satisfactory for the user.” While less data efficient and still computationally intensive, ES computation is easier to distribute for parallel computation and the policies (behaviours) discovered through ES are more diverse than through RL. Furthermore, new work shows that agents trained through simple ES algorithms exhibit the same or better performance on Atari games than RL agents.
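
To make the loop concrete, here is a minimal numpy sketch of a simple evolution strategy on a toy fitness function; a real application would replace the fitness function with, say, an Atari episode return.

```python
# Minimal evolution strategy: sample candidate solutions around the current
# parameters, score them, and move the parameters toward the better-scoring
# candidates. The fitness function is a toy (maximise -||x - target||^2).
import numpy as np

def fitness(x, target=np.array([3.0, -1.5])):
    return -np.sum((x - target) ** 2)

def evolution_strategy(dim=2, population=50, sigma=0.1, lr=0.05, generations=300):
    theta = np.zeros(dim)                          # current best guess
    for _ in range(generations):
        noise = np.random.randn(population, dim)
        candidates = theta + sigma * noise         # next generation of solutions
        rewards = np.array([fitness(c) for c in candidates])
        advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Reward-weighted step in the direction of the perturbations.
        theta = theta + lr / (population * sigma) * noise.T @ advantage
    return theta

print(evolution_strategy())  # ends up close to the target [3.0, -1.5]
```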

📖 Policy

New work has emerged from the AI safety world. OpenAI researchers penned an op-ed arguing, like many others and rightly so, that AI systems must be bound to the interests of their human developers. There are ample examples (here) where crafting the right reward function for an AI system to optimise is key to aligning its behavior with our own interests. The question is how we are to design an AI system that can effectively learn any task in any environment, perhaps even devising its own approach to learning, all the while encoding what we believe is morally sound behaviour. Furthermore, the Future of Humanity Institute and OpenAI published work on forecasting, preventing and mitigating the malicious use of AI. The report focuses on cyber attacks, autonomous systems, information manipulation and more, while calling for research openness, a culture of responsibility and legislative measures to preserve the public good.

Along these lines of safety, the team at DeepMind propose a framework to formalise and operationalise the notion of algorithmic fairness. According to their paper, the system takes a fair decision by correcting the variables that are descendants of the sensitive dataset attribute through unfair pathways (decision making logic that yields an unfair outcome). The correction aims at eliminating the unfair information contained in the descendants, induced by the sensitive attribute, while retaining the remaining fair information. This approach more naturally achieves path specific counterfactual fairness without completely disregarding information from the problematic descendants. Their method uses deep learning and approximate inference to generalise to complex, non-linear scenarios. DeepMind have also formed an Ethics & Society team to scrutinise the societal impacts of the technologies it creates.

🌎 Nation states

The topic of China’s role as a global contender for AI development is one that comes up more often than it used to in the conversations I’m having with the community. In the last edition, we talked about China’s plan to become a powerhouse by 2030 after matching the West within only three years. China has a ruthlessly entrepreneurial drive, deep capital sources, a highly technical talent pool (many of whom have earned advanced degrees abroad), and a totally different notion of data privacy compared to the West. Together, this means that we should expect major commercial and research breakthroughs from China. In particular, I think China is well positioned to innovate in sectors that come under heavy regulatory scrutiny in the US: healthcare, security and mobility. For example, Yitu Technologies markets a facial recognition system that matches individuals against a national database of 2 billion faces (and they’re not the only ones selling facial recognition in China). What’s more, a new city 10x the size of London is being built from the ground up for an autonomous mobility future. I also expect China to push through advances in AI-based medical diagnostics that have demonstrated efficacy today in the experimental context. It’s not just the government pushing forward; Alibaba, for example, has committed to invest a whopping $15B in R&D over three years across seven new research labs (US, China, Russia, Israel, Singapore).

Canada’s rise in AI: The Canadian Institute for Advanced Research (CIFAR) was founded in 1982 with the mission to be a “university without walls”. Funded by the Canadian government, CIFAR has groups such as Learning in Machines and Brains (set up by Hinton in 2003) that fund researchers from all over the world, irrespective of where they work. This idea of disseminating financing in a cross-border fashion is powerful; in fact, the AI Grant program by Daniel Gross and Nat Friedman expands on this idea by sharing grants through the web. The challenge is now to use this national momentum to create meaningful companies.

👫 Corporate-National tie ups

DeepMind doubled down on their Canadian exposure by setting up a lab in Montreal, in close collaboration with McGill University. The group is led by Doina Precup, whose work focuses on RL.

Facebook committed further grant support to CIFAR’s Learning in Machines & Brains program at McGill. The company also launched a one-year residency program not dissimilar to the Brain residency.

Both Facebook and Google have increased their capital commitment for AI research in Paris. FAIR Paris will grow to 100 staff by 2022 and Google will add another 300 employees to its French office, reaching 1000 staff.

Amazon opened a research facility in collaboration with the Max Planck Institute for Intelligent Systems in Tübingen, Germany. The group will grow to 100 researchers and leverages the Institute’s core competencies in vision and neuroscience.

Google is opening an AI center in China, their first in Asia, to be led by Fei Fei Li and Jia Li from Google Cloud AI.

🔬 Research

Here’s a selection of impactful work that caught my eye:

A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs, Vicarious AI. In this paper, the research team at Vicarious describe the Recursive Cortical Network (RCN), a generative, object-based model that assumes prior structure about the problem in order to facilitate model building. This differs from deep learning trained through backpropagation, which assumes that the model must be built entirely from scratch as it consumes data with no priors about the world. The authors focus on the CAPTCHA-solving problem, and therefore their RCN builds a lot of domain knowledge into its architecture, which allows it to separate shape and surface. Their full generative model also means that it can be used for many different problems: classification, occlusion, inpainting, generation and more. Moreover, with this type of knowledge scaffolding in place, learning and inference become far easier (as one would expect). This results in 300x more data-efficient learning in the case of scene text recognition. However, the model requires extremely clean training data: clear letters without any background in the same font as in the CAPTCHA. The generative model used to simulate 3D images also fails to generalise because the object in the test image must be of the exact same shape as that in the training image. Thus, the intricate pipeline presented here is extremely tailored to the CAPTCHA problem. Thanks to Julien Cornebise for sharing his feedback on this paper!

Progressive neural architecture search, Google Brain, Cloud and AI. In this paper, the authors expand on Google’s investments in AutoML to discover neural network architectures automatically, thereby abrogating the manual design process. Prior approaches to this problem have used RL or genetic algorithms: the former proposes designs and scores them, while the latter iteratively evolves changes and evaluates them sequentially. Here, they use a sequential model-based optimisation (SMBO) strategy instead. They do not search for a complete CNN, but instead search for cells (smaller units of the CNN itself), and do so in a structured search space. The process starts with low complexity architectures and proceeds to more complex ones while learning a function that guides the search. The result on CIFAR-10 is a network that has the same classification accuracy as the network designed by RL, but it is achieved after evaluating 2x fewer candidate models. SMBO also outperforms genetic algorithms to the tune of 5x fewer candidate model evaluations. The SMBO-learned model on CIFAR-10 also transfers onto ImageNet, matching state-of-the-art top-1 and top-5 accuracy. While this work shows that machine-generated networks can perform as well as human-designed networks for image classification, the real question is whether AutoML techniques can discover drastically novel networks that significantly outperform human-designed networks. I’m also interested to see how AutoML works on problems where the optimal network design is not known.
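
For intuition, here is a toy sketch of the SMBO pattern only (not the paper’s actual search space or predictor): grow candidates from simple to more complex, use a cheap learned surrogate to rank them, and spend the expensive evaluations on the predicted top-K.

```python
# Toy sequential model-based optimisation (SMBO) loop: a cheap surrogate (ridge
# regression here) predicts which candidate "architectures" (random vectors here)
# deserve an expensive evaluation. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def expensive_score(arch):
    """Stand-in for 'train the candidate network and measure validation accuracy'."""
    return -np.sum((arch - 0.7) ** 2) + 0.01 * rng.normal()

def fit_surrogate(archs, scores):
    """Fit a cheap predictor of score from the architecture encoding."""
    X = np.hstack([archs, np.ones((len(archs), 1))])
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ np.array(scores))
    return lambda a: np.hstack([a, 1.0]) @ w

evaluated, scores, K = [], [], 8
for complexity in range(1, 5):                            # progressively richer candidates
    candidates = rng.random((200, 4)) * complexity / 4
    if evaluated:
        surrogate = fit_surrogate(np.array(evaluated), scores)
        predicted = np.array([surrogate(c) for c in candidates])
        candidates = candidates[np.argsort(predicted)[-K:]]  # only evaluate the top-K
    else:
        candidates = candidates[:K]                           # no surrogate yet
    for c in candidates:
        evaluated.append(c)
        scores.append(expensive_score(c))

best = evaluated[int(np.argmax(scores))]
print("best architecture encoding:", best)
```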

Other highlights:

  • Backing off towards simplicity — why baselines need more love, Stephen Merity. An important argument that without baselines against which to measure model performance, it’s impossible to chart progress over time.
  • On the topic of measuring progress, a new challenge was launched at ICLR 2018 that focuses on reproducibility of research. Here, an audience of machine learning students is tasked with selecting a submitted paper and attempting to replicate the experiments and conclusions described in the study. Super important initiative because “verifiable knowledge is the foundation of science”.
  • Another initiative towards reproducibility was shared in a paper entitled DLPaper2Code: Auto-generation of code from deep learning research papers by IIIT Delhi and IBM. Here they show that you can take a research paper, extract figures and tables, classify whether it includes a DL model and what kind of figure it is, extract flow information using OCR and NLP, then build an abstract computational graph that can be compiled into Keras or Caffe code!
  • High-resolution image synthesis and semantic manipulation with conditional GANs by NVIDIA and UC Berkeley. This study is an attempt to synthesise 2048x1024px images interactively using a graphical user interface where the user paints a semantic map of a scene and a GAN synthesises the matching realistic-looking image.
  • StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation, Korea Uni and HKU. This paper presents an approach to learn one model that translates between multiple domains without supervision whereas prior iterations could only translate between two domains without supervision. For example, take a headshot and change several aspects of the image in one go (e.g. hair color, pose, smile, etc).
  • Moments in Time Dataset: one million videos for event understanding, MIT, IBM, Brown. In this research, the authors construct a database of 3 second video clips of events that include people, objects, animals and natural phenomena. Each clip is procured from YouTube and other online video libraries, and annotated with one of 339 different action or activity class labels (e.g. closing, carrying, flying). However, these labels and predictions generated by their models are rather high level concepts that lack granularity for real world use cases just yet. The dataset adds to the collection of TwentyBN Something Something, DeepMind Kinetics, and Google AVA.
  • Generative Adversarial Networks: An Overview, MILA and Imperial College London. A much needed overview of the GAN zoo!
  • Progressive growing of GANs for improved quality, stability and variation, NVIDIA. The authors present a new way of training generative adversarial networks to output high-resolution images (1024x1024 face examples). Their key insight is to grow both the generator and discriminator progressively, starting from easier low-resolution images, and add new layers that introduce higher-resolution details as the training progresses. This greatly speeds up training and improves stability in high resolutions: example video here.
  • NIPS 2017 reviews: day 1, day 2, days 3+4, and days 5+6, kindly created by Cody Marie Wild. If that’s not sufficient, here’s an extensive 43-page NIPS narrative!
  • Learning with privacy at scale, Apple. Here, a system architecture is presented to learn the frequencies of elements used by a user (e.g. emojis or web domains) while leveraging local differential privacy to safeguard user privacy. Data is privatised on the user’s device using event-level differential privacy in a local model, and server communications occur over an encrypted channel once a day without device identifiers. A toy sketch of the local-privacy idea follows this list.
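
Below is a toy illustration of the local differential privacy idea, using classic randomised response rather than Apple’s actual sketching algorithms: each device perturbs its own report, yet the server can still estimate aggregate frequencies.

```python
# Toy local differential privacy via randomised response: each "device" reports
# its true emoji with probability p and a uniformly random one otherwise; the
# server de-biases the noisy counts to estimate the true frequencies.
import numpy as np

rng = np.random.default_rng(42)
emoji = ["😀", "😭", "❤️", "🔥"]
true_counts = {"😀": 5000, "😭": 3000, "❤️": 1500, "🔥": 500}
p = 0.75  # probability of reporting the true value

def privatise(value):
    return value if rng.random() < p else emoji[rng.integers(len(emoji))]

reports = [privatise(v) for v, count in true_counts.items() for _ in range(count)]

# E[observed_e] = true_e * p + n * (1 - p) / k, so invert to recover true_e.
n, k = len(reports), len(emoji)
for e in emoji:
    estimate = (reports.count(e) - n * (1 - p) / k) / p
    print(e, round(estimate))
```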

📑 Resources

Kaggle conducted an industry-wide survey for salaries, education, job sources, job tasks and more in data science and machine learning. Definitely worth a read for those who are hiring!

How to get started using the popular Robot Operating System (ROS) for training self-driving cars.

Jeff Dean gave a talk at NIPS 2017, Machine learning for systems and systems for machine learning in which he gives concrete examples for AutoML and the many opportunities for ML in replacing heuristics.

Seb Ruder shared a post exploring the deficiencies of word embeddings, a central component of natural language processing models, and how recent approaches have tried to resolve them.

Here’s a compilation of graphs that show the progress in performance on various ML tasks — useful for presentations you might need to create!

More graphs about the state of the AI industry in 2017, including publications, university course enrollment, startups and open questions.

How Spotify uses collaborative filtering, NLP and acoustic models to power their music recommendation technology.

Every chapter of the famous Deep Learning book written by Goodfellow, Bengio and Courville, is now available in video walk through format.

After announcing their internal ML platform called Michelangelo, Uber’s engineering team has gone on to open source a large-scale distributed deep learning training framework on TensorFlow, called Horovod. They found that training on 128 GPUs with the standard distributed TensorFlow package left up to 50% of their resources unused. Horovod implements and expands on Baidu’s ring-allreduce algorithm for updating model gradients without a parameter server to store them. Uber AI labs has also open sourced Pyro, an open source probabilistic programming language that connects deep learning with Bayesian modelling.
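
For a sense of how lightweight the change is, here is a hedged sketch of the Horovod pattern on the TensorFlow 1.x-era API; the toy model and data are placeholders, and each process would be launched with MPI, e.g. `mpirun -np 4 python train.py`.

```python
# Illustrative Horovod + TensorFlow 1.x training script: each MPI process owns one
# GPU and gradients are averaged across processes with ring-allreduce, so no
# parameter server is needed. The model and data here are toy placeholders.
import numpy as np
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# Pin this process to a single local GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
loss = tf.reduce_mean(tf.square(tf.layers.dense(x, 1) - y))

# Scale the learning rate with the number of workers and wrap the optimiser so
# each gradient update is averaged across all workers via ring-allreduce.
opt = hvd.DistributedOptimizer(tf.train.AdamOptimizer(0.001 * hvd.size()))
train_op = opt.minimize(loss)

# Broadcast initial variables from rank 0 so every worker starts from the same state.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]

with tf.train.MonitoredTrainingSession(hooks=hooks, config=config) as sess:
    for _ in range(100):
        xb, yb = np.random.rand(32, 10), np.random.rand(32, 1)
        sess.run(train_op, feed_dict={x: xb, y: yb})
```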

Is Bayesian deep learning the most brilliant thing ever? (video) — a star-studded panel at NIPS discussing the intersection between Bayesian nonparametrics and deep learning.

AI and deep learning in 2017 — a superb review by Denny Britz!

The impossibility of intelligence explosion — a piece by François Chollet arguing that intelligence is not about the cognitive abilities of the brain on its own, but how it ties in with the broader system of your body and environment.

💰 Venture capital deals

319 deals (60% US, 20% EU, 13% Asia) totalling $3.9B (56% US, 7% EU, 31% Asia). Note a significant uptick (2.5x) in deal value compared to the trailing 3-month period, largely driven by activity in Asia.

China’s facial recognition company Face++ raised a $460M Series C led by the China State-Owned Assets Venture Investment Fund. This led to a 10x growth in total capital raised for the 7-year-old company.

Embodied Intelligence raised a $7M Seed round led by Amplify Partners in SF. The company is founded and led by robotics and deep learning rockstar Pieter Abbeel.

Graphcore raised a $50M Series C financing led by Sequoia Capital, adding to the $60M already raised in the company’s Series A and B rounds.

Bay Labs raised a $5.5M Series A financing led by Khosla Ventures, along with DCVC and others. The company applies deep learning to cardiovascular imaging.

LabGenius raised a $3.6M Seed round led by Kindred Capital and Acequia for their AI-driven protein engineering software. I’m excited to have partnered with James and the team as an angel investor.

28 exits, including:

Jianpu Technology, which has operated mobile-based consumer lending products in China since 2011, raised $220M on the NYSE at a valuation of $1.31B. The company employs over 600 FTEs and was privately backed by Sequoia China and Lightspeed China, amongst others. The company now trades at

Argus Security was acquired by Continental for $400M. The two businesses had been working together to protect the internal communications of vehicles with network-connected features. Argus had also developed a solution for secure over-the-air vehicle software updates with Continental subsidiary Elektrobit. Founded in 2013, Argus has raised $30 million, including $26 million two years ago from Magna International (a Tier 1 OEM), Allianz (a large insurer), SBI Group and Israeli venture capital funds Magma and Vertex.

Apple acquired Shazam, the music recognition company, for a reported $400M — a far cry from the private company’s $1B paper valuation. Shazam launched in 2002 and rode the transition from feature phones to smartphones, ultimately reaching over a billion downloads, $54M in revenue and $5M in losses in 2016. Presumably Apple’s interest is to improve its own Siri music recognition system and also expand its view of listeners’ preferences and emerging artists that could be pulled into its Apple Music offering.

---

Congrats on making it to the end; here’s some comic relief :)

Anything else catch your eye? Just hit reply!

