How To Process Engineering Drawings With AI

by Oleg Kokorin, August 29th, 2024

Too Long; Didn't Read

Engineering drawings are complex, unstructured documents, which makes them difficult to process with traditional document-handling tools. AI, on the other hand, promises quick and accurate data extraction, especially through ready-made tools marketed specifically for engineering drawings. In practice, things are not as good as they seem: the unstructured nature of engineering drawings presents a significant challenge to pre-made AI systems. In this article, I share how AI can be used to build a truly functional, highly accurate engineering drawing processing system.

Having worked on multiple technical drawing processing projects, we knew it was only a matter of time before an engineering drawing automation project came our way. What’s so special about engineering drawings, you ask?


Geometric dimensioning and tolerancing (GD&T) annotations are the answer. These pesky labels often present a challenge when processing and extracting data from engineering drawings because of their position on the page and their overall structure. But don’t fret: I’m here to share exactly how we managed to process GD&T annotations on engineering drawings with AI. Let’s start from the beginning, though.

Processing Unstructured Documents

All digital documents can be separated into two types, structured and unstructured:


  • Structured documents follow a predefined structure, making them easy to process and analyze with AI. Documents like forms, invoices, receipts, surveys, and contracts are all examples of structured documents.


  • On the contrary, unstructured documents lack a consistent organization, making them inherently challenging to process automatically. Examples of unstructured documents include newspapers, research papers, and business reports.


As you might have guessed, technical drawings are a classic example of unstructured documents: despite adhering to a strict set of standards, each drawing differs from the next and lacks a rigid layout. Coupled with a mix of typed and handwritten text, special symbols, complex tables, and various annotations, technical drawings present a real challenge for automatic data extraction.


The complex nature of technical drawings makes them the perfect candidate for AI data extraction. In fact, using neural models to detect and extract various data from the drawings is the only way to automate their processing. Modern computer vision models and a smart approach to product development can yield a powerful tool for quick processing of any technical drawing.

A Problem With Ready-Made Tools

One quick Google search will turn up at least a couple of solutions for processing engineering drawings. Nearly all of them advertise broad functionality and promise quick, accurate processing of complex data.


At first glance, this sounds very promising: pay a monthly subscription and get engineering drawings processed with high precision. In practice, however, things are rarely that smooth.


Ready-made tools often struggle to detect and process rotated elements as their algorithms are only trained to process the “common denominator”, which, in our case, is an engineering drawing with labels and annotations positioned horizontally.

A ready-made solution is therefore only suitable for those whose drawings are relatively simple and contain only standard data. Any deviation from the “common denominator” will present a challenge for a ready-made tool.

Feature Extraction From Engineering Drawings

This exact situation happened to one of our clients: the engineering drawing processing solutions on the market couldn’t handle their complex, non-standard drawings, leaving them with poor data recognition results.


GD&T annotations carry important information that is vital to extract from the drawing for further processing, but their position on the page (in our case, at an angle) throws a wrench into the analysis performed by a pre-made AI tool.


This is where custom AI development comes into play: AI models trained to detect and extract information from your specific document can solve (almost) any challenge a ready-made tool struggles with.


Here’s how we solved one of the challenges of processing engineering drawings with custom AI model development: extracting GD&T annotations placed at an angle.

Step 1: Detecting Annotation Position

The first step is to locate the annotations on the drawing. An AI model can be trained to detect them regardless of where they sit on the page or how they are rotated.
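As an illustration, here is a minimal sketch of this detection step in Python. It assumes an Ultralytics YOLO model fine-tuned on annotation bounding boxes; the weights file name and confidence threshold are hypothetical placeholders.

```python
# Minimal sketch: detecting GD&T annotation regions with an object detector.
# "gdt_annotations.pt" is a hypothetical weights file fine-tuned on annotation boxes.
import cv2
from ultralytics import YOLO

model = YOLO("gdt_annotations.pt")

def detect_annotations(image_path: str):
    """Return cropped annotation regions found on a single drawing page."""
    image = cv2.imread(image_path)
    results = model.predict(image, conf=0.5)  # confidence threshold is just a starting point
    crops = []
    for box in results[0].boxes.xyxy.tolist():
        x1, y1, x2, y2 = map(int, box)
        crops.append(image[y1:y2, x1:x2])
    return crops
```

Each returned crop is then passed to the rotation-correction step described below.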


Note: Multi-page documents require an additional step of splitting the document into pages and separating the individual engineering drawings. The same goes for documents with multiple drawings per page: you first need to run a model that detects each drawing and extracts it from the document.
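If the drawings arrive as a PDF, a page-splitting pass can run before detection. A minimal sketch, assuming pdf2image (and its poppler dependency) is installed:

```python
# Minimal sketch: splitting a multi-page PDF into per-page images before detection.
from pdf2image import convert_from_path

def split_pages(pdf_path: str, dpi: int = 300):
    """Render each PDF page as a separate image so pages can be processed one by one."""
    return convert_from_path(pdf_path, dpi=dpi)

# Hypothetical usage: save each page for the detection stage.
for i, page in enumerate(split_pages("drawing_set.pdf")):
    page.save(f"page_{i}.png")
```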

Step 2: Detecting the Rotation Angle

Here’s the important part: detecting how the annotation is rotated. The AI model needs to calculate the rotation angle and rotate the annotation until it is horizontal. The cut-out PNGs are then passed on for further processing.
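Below is a rough sketch of how the angle estimation and correction could look using OpenCV alone. It assumes dark annotation pixels on a light background; note that minAreaRect’s angle convention differs between OpenCV versions, so the mapping may need adjusting.

```python
# Minimal sketch: estimating an annotation's rotation angle and deskewing it.
import cv2
import numpy as np

def deskew_annotation(crop: np.ndarray) -> np.ndarray:
    """Rotate a cropped annotation back to horizontal."""
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    # Binarize so only annotation pixels remain, then fit a rotated rectangle around them.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    points = cv2.findNonZero(binary)
    angle = cv2.minAreaRect(points)[-1]
    # Assumes the [0, 90) angle range of recent OpenCV releases; adjust for older versions.
    if angle > 45:
        angle -= 90
    h, w = crop.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(crop, matrix, (w, h),
                          flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
```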

Step 3: Extracting Data From Annotations

After all annotations are detected, rotated, and extracted from the drawing, they are run through an OCR engine. Tesseract is a good choice for this: it recognizes symbols with high accuracy and can work with multi-line text and symbols of different heights.


First, you need to find the exact area where the text is located to improve the symbol recognition process. I would recommend using OpenCV as it handles these tasks very well and is relatively easy to work with. Next, the detected text area is handed over to the OCR engine to extract all text and symbols.
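Here is a condensed sketch of this step, using OpenCV to locate the text block and pytesseract as the interface to Tesseract. The morphology kernel size and the --psm mode are assumptions to tune for your drawings.

```python
# Minimal sketch: isolating the text area inside a deskewed annotation crop,
# then passing it to Tesseract for recognition.
import cv2
import pytesseract

def read_annotation(crop) -> str:
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dilate so characters merge into one block, then take the largest contour as the text area.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5))
    dilated = cv2.dilate(binary, kernel)
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return ""
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # --psm 6 treats the region as a uniform block of text, which suits multi-line annotations.
    return pytesseract.image_to_string(gray[y:y + h, x:x + w], config="--psm 6")
```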

Step 4: Analyzing Data

The resulting array of letters, numbers, and symbols needs to be interpreted into “digestible” data that humans, or a data management system, can understand and process. Detected symbols are separated into groups forming part dimensions, tolerances, fits, and radii.
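For illustration, a simplified parsing sketch. The regular expressions below are illustrative assumptions, not the actual production rules; real GD&T grammars are considerably richer.

```python
# Minimal sketch: grouping raw OCR output into structured fields.
import re

def parse_annotation(text: str) -> dict:
    return {
        "dimensions": re.findall(r"Ø?\d+(?:\.\d+)?", text),    # e.g. "Ø25.4" or "12.7"
        "tolerances": re.findall(r"±\s?\d+(?:\.\d+)?", text),  # e.g. "±0.05"
        "fits": re.findall(r"\b[A-Za-z]{1,2}\d{1,2}\b", text), # e.g. "H7", "g6"
        "radii": re.findall(r"R\d+(?:\.\d+)?", text),          # e.g. "R5"
    }
```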

Step 5: Data Management

Data extracted by the AI system needs to be delivered in a format that fits your workflow (a short export sketch follows the list below):


  1. JSON files: Perfect for importing the data into existing software.
  2. .XLSX files: An easy-to-read format, perfect for system testing or small batches of data.
  3. Post-processing: Data is additionally processed and sent straight to a digital document handling system; great for those looking for a complete solution.
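A minimal export sketch for the first two options, assuming pandas and openpyxl are installed; field names and file paths are placeholders.

```python
# Minimal sketch: exporting parsed annotations as JSON or as an .xlsx sheet.
import json
import pandas as pd

def export_results(records: list, json_path: str, xlsx_path: str) -> None:
    # JSON: convenient for feeding downstream software or an API.
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
    # XLSX: easy for a human reviewer to scan during testing.
    pd.DataFrame(records).to_excel(xlsx_path, index=False)
```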

Summing Up

  1. While the market is full of AI tools for processing documents, they only handle simple files well. Any deviation from the “norm” is better processed with a custom solution.


  2. Custom AI models can handle virtually all data extraction tasks — given the right approach and developer skills.


  3. Engineering drawings are not the only technical drawings I’ve written about: check out how AI can help process architectural drawings here.