Our Quest To Solve Inefficient Web Traffic Monetization

Pavel Shkliaev (@pavel-shkliaev)

I grew up on such books as «Roadside Picnic» by the Strugatsky Brothers and «The Inhabited Island»

Have you heard of, or perhaps even tried, the new ways to buy products you see on TV? You know, the features that invite you to shop for items shown while your favorite program is on the air? They offer various user interaction mechanics, from scanning a QR code in the corner of the TV screen to pressing a sequence of navigation buttons on the remote control to receive a text message with a link to the product. What a maze, I have to say.

Come on, people!

Do you really think a user can enjoy such an effortful shopping experience? I would change my mind about a purchase several times before making it through a "navigation hell" like this...

Let me spill the tea here. We are often compared to such products, and when that happens, it devastates us.

We Are Different

If you missed the previous articles about our technology, I recommend you familiarize yourself with what we've been working on and how it all started before reading this one (it will make more sense to you then).

  • How We Taught Artificial Intelligence to Sell
  • Where Can Dreams Take You? The Art of Teaching Associative Thinking to Machines
  • Online Advertising in The Time of The Consumer Attention Deficit Hyperactivity Syndrome (ADHD)
Different by Concept

Inspired by computer games, cyberpunk, and HUD interfaces, we imagined a wholly digitalized world full of AR-based interactions, where every object would let you get information about it and there were no barriers to purchasing it. That's how we started to develop what we internally call the "seamless sale" concept: it gives the user the opportunity to buy goods directly from the open world while interacting with an object or viewing the content that object appears in. No visit to the store's website or anywhere else is necessary. Effortless!

For us at LensAI, the fundamental quest was to solve inefficient web traffic monetization by embedding our ads in any graphic or video content, regardless of its release date. In "techy" words, we had to train the machine not only to detect all types of objects in content (from cats and dogs to a spaceship) but to detect them instantly. Instant execution was an essential criterion: we wanted the machine to show ads relevant to the content at the very moment the user is viewing it.

To solve these problems, we first had to become conceptually different in our technology; everything else came second. Our conceptual difference is not recognizing objects and then searching for similar ones, but the machine's ability to understand the cumulative meaning of all three types of content: visual, textual, and audio.

    Feel free to explore more details on the challenges we faced in the article Where Can Dreams Take You? The Art of Teaching Associative Thinking to Machines.

    Different by Technology

    The foundation of the world's digitalization lies in the classification of all possible entities, objects, events, actions, and relationships between all of them. It reminds me of creating a game engine where particle interaction (physics) is replaced with a comprehensive advertising graph with a detailed description of how every entity relates to and influences each other.

When creating our advertising relevance model, we combined various taxonomies (a taxonomy "is the practice and science of classification of things or concepts, including the principles that underlie such classification"). [1] In this model, our AI detects every entity and how entities influence each other.

    This approach gave us a clear understanding of how everything found in the image correlates to its context.

    The understanding we’ve gained opens up almost endless possibilities for both displaying ads and accomplishing more specific tasks.
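To make the relevance-graph idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the entity names, weights, and the `RelevanceGraph` class are illustrative stand-ins, not LensAI's actual data model. The point is only to show how weighted edges between taxonomy entities can rank associations.

```python
from collections import defaultdict

class RelevanceGraph:
    """Toy relevance graph: nodes are taxonomy entities, weighted edges
    express how strongly two entities relate in an advertising context."""

    def __init__(self):
        self.edges = defaultdict(dict)  # entity -> {related entity: weight}

    def relate(self, a, b, weight):
        # Relations are symmetric in this sketch.
        self.edges[a][b] = weight
        self.edges[b][a] = weight

    def related(self, entity, min_weight=0.0):
        """Entities related to `entity`, strongest association first."""
        return sorted(
            (e for e, w in self.edges[entity].items() if w >= min_weight),
            key=lambda e: self.edges[entity][e],
            reverse=True,
        )

graph = RelevanceGraph()
graph.relate("dog", "pet food", 0.9)
graph.relate("dog", "leash", 0.8)
graph.relate("dog", "vacuum cleaner", 0.3)  # weaker pet-hair association

print(graph.related("dog", min_weight=0.5))  # ['pet food', 'leash']
```

A production system would of course learn these weights rather than hand-code them, but the lookup pattern — detect an entity, then walk its strongest edges — stays the same.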

    Different by Capabilities

Our technological differences give us access to previously inaccessible opportunities.

    Advertising Selection for Any Object

    Case #1:

The advertising goal is to sell products from the eCommerce catalog by selecting the fittest ones for ad display from among 5,500 advertising categories in Google Products and the advertising feeds/APIs of top US marketplaces.

    Tasks:

    - Detect objects in images or video frames.
    - Determine which of the objects are suitable for display advertising.
- For each object determined to be suitable for display advertising, first select relevant eCommerce advertising categories and marketplaces, and then specific products (ranging from pet goods to aircraft parts).
    - Lastly, select the fittest products for ad impressions based on:
           -- the product availabilities, the rotation of impressions, and other parameters that affect the decision to display a particular product,
           -- the context analysis,
           -- the similarity analysis.
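The steps above can be sketched as a small selection pipeline. This is a hedged illustration, not LensAI's implementation: object detection is assumed to have already happened, and `Product`, `select_ads`, and the two score fields are invented names standing in for the availability, context, and similarity signals the list describes.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    category: str
    in_stock: bool            # availability signal
    context_score: float      # relevance to the page's context, 0..1
    similarity_score: float   # visual similarity to the detected object, 0..1

def select_ads(detected_objects, catalog, top_n=1):
    """For each detected object class, pick the best-fitting in-stock products."""
    ads = {}
    for obj in detected_objects:
        candidates = [
            p for p in catalog
            if p.category == obj and p.in_stock  # availability filter
        ]
        # Rank by combined context + similarity score, best first.
        candidates.sort(
            key=lambda p: p.context_score + p.similarity_score, reverse=True
        )
        ads[obj] = [p.name for p in candidates[:top_n]]
    return ads

catalog = [
    Product("Dog Bed Deluxe", "pet goods", True, 0.9, 0.8),
    Product("Chew Toy", "pet goods", True, 0.6, 0.7),
    Product("Turbine Blade", "aircraft parts", False, 0.9, 0.9),
]
print(select_ads(["pet goods"], catalog))  # {'pet goods': ['Dog Bed Deluxe']}
```

Rotation of impressions (from the list above) would add a frequency term to the ranking key; it is omitted here to keep the sketch short.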

    Product Display Ads: Cross-Sell & Upsell

Case #2: The advertising goal is to sell products from the eCommerce catalog.

    Tasks:

    - Detect objects in images or video frames.
    - Determine which of the objects are suitable for display advertising.
    - Select the fittest eCommerce advertising creatives to render on a user's screen depending on the product availability, rotation of ad impressions, and other parameters that influence the decision of ads being fetched.
    

Relevant Ad Selection Regardless of The Initial Content Release Date

Case #3: The advertising goal is to sell products from the eCommerce catalog.

    Tasks:

    - Detect objects in images or video frames.
    - Determine which of the objects are suitable for display advertising.
    - Select the fittest eCommerce advertising creatives based on associative thinking algorithms.
    

    Associative Display Advertising Selection

Case #4: The advertising goal is to boost brand awareness, traffic, and conversions.

    Tasks:

    - Detect objects in images on the site.
- Determine which of the objects are associatively suitable for displaying advertisements for various internet, cable TV, or telephone plans.
- Select the fittest advertising creatives from among those provided by an advertiser (there are many such creatives).
    - Embed advertising creatives.
    

    New Opportunities for Advertisers

    New Opportunities for Publishers

    Different by Ad Format: Visualization & User Interaction

    At the early "birth-of-an-idea" stage, we dreamt of AR smart glasses. We were inspired by interaction with the outside world and the instant "getting-to-know-it" process: the possibility to obtain all information about the object as soon as we focus our attention on it.

The cost of consumer attention rises every day. Rapidly embedding our ad formats straight into the graphic/video content that has caught the user's eye seemed the most promising approach. Alongside, we enlivened the picture with animation, added bindings, and tried to cause no significant interruption to the main content.

Initially, we were eager to implement a spatial positioning algorithm that would account for objects' perspective in the frame, making our ads shift relative to other objects as the video played. Old geek wisdom saved me from trouble (at least for now): "Do not develop new features until you have sold the old ones."

    Having developed a completely new #MoneyDot format, we endowed it with great flexibility and settings for various uses based on advertising goals and advertising creatives currently available on the market.

    #MoneyDot Ad Formats

    Our ad formats support the following advertising goals:

    • Brand Awareness
    • Traffic 
    • Mobile App Downloads
    • Video Views
    • Conversions & Message Displays
    • eCommerce Catalog Sales

    We work with Publishers according to the following scenario:

    1. Publishers install LensAI scripts to their websites. The installation process is similar to Google AdSense.
    2. Publishers may configure or use default settings.
    3. Scripts extract visual content once user traffic is recorded.
    4. LensAI servers analyze the content of the web pages, identify images and videos, and detect objects in them (e.g., a dress, chairs, a microwave, etc.)
    5. LensAI algorithms find matching product categories for the detected objects and then products themselves.
    6. A user comes to the website.
7. (*Optional Step) The pre-installed LensAI scripts (see step 1) look up the user's interests through cookies.
8. Based on that particular user's interests and on content and context analysis, ads for the most relevant products are embedded into objects in the visual (video and image) content.
    9. LensAI receives a commission from marketplaces/advertisers for ad impressions and purchases made by users on Publishing Partners' websites.
    10. LensAI shares its earned commission with its Publishing Partners.
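Steps 4–8 of the scenario above can be simulated in a few lines. This is a deliberately simplified sketch: the detection results, the category map, and the interest-matching rule are all hypothetical stand-ins for LensAI's server-side pipeline.

```python
# Step 4 (stubbed): objects detected in the page's visual content.
def detect_objects(page_images):
    detections = {"hero.jpg": ["dress", "chair"]}
    return {img: detections.get(img, []) for img in page_images}

# Step 5 (stubbed taxonomy): map detected objects to product categories.
CATEGORY_MAP = {"dress": "Apparel > Dresses", "chair": "Furniture > Chairs"}

# Steps 7-8: keep only the detections whose category overlaps the
# user's interests, and return the ad placements to embed.
def ads_for_user(page_images, user_interests):
    embedded = []
    for img, objects in detect_objects(page_images).items():
        for obj in objects:
            category = CATEGORY_MAP.get(obj)
            if category and any(i in category.lower() for i in user_interests):
                embedded.append((img, obj, category))
    return embedded

print(ads_for_user(["hero.jpg"], ["apparel"]))
# [('hero.jpg', 'dress', 'Apparel > Dresses')]
```

In the real flow, detection and category matching happen ahead of time on LensAI servers (steps 3–5), so step 8 at page-view time is only a lookup plus the optional interest filter.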

    The Cases

    Demonstration of Technology on The Verizon FIOS TV Platform (In-Video Ads)

    For this demonstration, the following tasks were set:

- Show advertisements for various Verizon services, including internet, cable TV, and phone.
- Show other advertisements for products found in the video content.

    The demo shows the performance of our advertising format for both Verizon plans and products available from various U.S. marketplaces.

    Desktop version

    Mobile version

    Demonstration of Technology on The TechCrunch Platform (In-Image Ads on Website)

    For this demonstration, the following tasks were set:

- Show advertisements for various Verizon services, including internet, cable TV, and phone.
- Show advertisements for other products found in the content.
    

    The demo shows the performance of our advertising format for Verizon plans only.

    The demo shows the performance of our advertising format for Verizon and other advertisers.

    Demonstration of Technology on The AOL Platform (In-Video Ads on Website)

    For this demonstration, the following tasks were set:

- Show advertisements for various Verizon services, including internet, cable TV, and phone.
- Show media advertising for other advertisers.
- Show advertisements for other products found in the content.
    

    Desktop version

    Mobile version

All advertising goals and various display advertising options can be combined and customized based on the specification.

    Brief Information About Settings

    Our ad format has a flexible system of settings that enable publishers to fine-tune their advertisements.

    1. Platforms

    • Mobile
    • Desktop

2. Ad Placement Area. The setting enables publishers to choose whether to embed ads across the entire site, on an individual page only, or even in a particular area of that page. There are two ways ad slots can be created:

    • Ad slots that are generated by our system automatically.
    • Ad slots that are hand-picked by publishers.

3. Content for Ad Placement. For ad placements, we use the visual (graphic) content our system detects on sites. After content analysis, the AI selects the most relevant ads to show in it. The following formats are supported for embedding our ads:

    • Images: jpeg, jpg, png, webp, gif
    • Videos

4. Context for Ad Placement (IAB Standard). Along with graphic content, our system analyzes any accompanying textual content. This additional analysis helps the AI make the best selection for ad display.

5. Placement Position. The setting enables publishers to choose the position on the site of the content in which ads will be embedded; it relates to visibility and significance. Here is the list of ad placement positions to choose from:

    • above the fold,
    • below the fold,
    • header,
    • footer.

    6. Ad Placement Density. The setting enables publishers to choose the frequency of ads to be embedded:

    • For images, publishers can set the frequency of ad displays per page.
    • For videos,  publishers can set the frequency of ad displays per video stream.

7. The Number of Ad Placements. The setting enables publishers to set the number of simultaneous ad placements in the content.

• For images, publishers can set the number of ads embedded per image.
• For videos, publishers can set the number of ads embedded per video frame.

    8. The Order for Ad Placements. The setting sets the rules and describes specific placement parameters for advertisements on the platform as a whole, its part, or its content. 

    9. Ad Placement Categorization. The setting specifies what categories of Google Product Taxonomy as well as brands can be advertised and what advertising purposes can be executed on the platform as a whole, its part, or its content. 

    • Advertising Purposes
    • Categories
    • Brands

    10. Ad Placement Classes. The setting enables publishers to choose specific classes of objects among detected ones to use for ad display on the platform as a whole, its part, or its content. 

    11. User Data Parameters. The setting allows publishers to choose the characteristics of users to target for ad displays. 

    • Browser Data
    • Device
• User Data (IAB): Interests & Purchases

    12. Types of Ad Format. We developed various ad formats to be placed straight into the graphic content. The setting allows publishers to choose which ad formats to use for display. Here is a list of currently available ones:

    • #MoneyDot
    • #MoneyBox
    • #MoneyLens
    • Abar classic
    • Vbar

    13. Display Methods. The setting allows publishers to choose various ways to display ads:

    • Animation
    • Elements
    • Interaction
    • Dimensions
    • Display Logic

    14. Other Settings
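To show how the settings above might fit together, here is a hypothetical publisher configuration with a basic sanity check. The keys, values, and `validate` helper are purely illustrative; the real settings schema is not public.

```python
# Illustrative publisher configuration covering settings 1-13 above.
publisher_settings = {
    "platforms": ["mobile", "desktop"],                        # 1
    "placement_area": {"scope": "site", "slots": "auto"},      # 2 (or hand-picked)
    "content_formats": {                                       # 3
        "images": ["jpeg", "jpg", "png", "webp", "gif"],
        "videos": True,
    },
    "placement_position": ["above_the_fold", "header"],        # 5
    "density": {"ads_per_page": 3, "ads_per_video_stream": 2}, # 6
    "ads_per_placement": {"per_image": 1, "per_video_frame": 1},  # 7
    "categorization": {                                        # 9
        "purposes": ["traffic"],
        "categories": ["Apparel"],
        "brands": [],
    },
    "ad_formats": ["#MoneyDot", "#MoneyLens"],                 # 12
    "display": {"animation": True, "interaction": "hover"},    # 13
}

def validate(settings):
    """Basic sanity checks a publisher dashboard might run."""
    assert settings["density"]["ads_per_page"] >= 1
    assert set(settings["platforms"]) <= {"mobile", "desktop"}
    assert settings["ads_per_placement"]["per_image"] >= 1
    return True

print(validate(publisher_settings))  # True
```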

    Is there a need to implement more "meanings" into technology?

If you read to the end of this article, you rock, and we share the same blood! :) Join us, invest in us, do business with us.
