# Democratizing precision measurement with geometry-driven computer vision

*This article describes an independent engineering exploration. It does not represent any employer, institution, or commercial product.*

High-precision part inspection has traditionally required expensive hardware: coordinate measuring machines (CMMs), optical comparators, laser scanners, or telecentric vision systems. These tools are powerful, but they are also slow, costly, and often inaccessible to small and medium-sized manufacturers.

At the same time, many real-world inspection tasks are fundamentally **2D geometric problems**:

- Measuring hole diameters
- Verifying spacing between features
- Inspecting slots, fillets, chamfers, and outer profiles
- Comparing physical parts against engineering drawings

This gap motivated me to build an **image-based part inspection system** that extracts geometric features directly from images and converts them into measurable, CAD-ready representations, using only low-cost cameras and computation.

This article explains the engineering pipeline behind that system, why traditional approaches struggle, and how a geometry-first vision architecture enables reliable, high-precision measurement without expensive metrology hardware.

## The Core Challenge: Images Are Not Geometry

Engineering drawings assume perfect edges and clean curves. Real images do not. Even under controlled lighting, images of machined parts suffer from:

- Illumination gradients and reflections
- Surface texture and machining marks
- Broken or partially occluded edges
- Noise from low-cost optics
- Perspective distortion

A single part may contain concentric holes, chamfers, fillets, slots, and partial arcs, often with only fragments of each feature visible.

Naive solutions such as template matching, Hough transforms, or object detection models tend to fail because they operate at the **pixel or pattern level**, while inspection is a **geometric reasoning problem**.

To solve this, I designed a pipeline that elevates pixels into geometry as early as possible.
## System Overview

The system follows a multi-stage architecture:

1. Image normalization and calibration
2. Precision edge extraction
3. Contour segmentation into geometric primitives
4. Geometry-driven feature recognition
5. Dimensional measurement in calibrated space
6. CAD reconstruction and validation

Each stage aggressively filters noise while preserving geometric meaning.

## 1. Image Normalization: Stabilizing the Input

Lighting variation is one of the biggest threats to measurement accuracy. Before any geometry is extracted, the image is normalized using:

- Illumination compensation (low-frequency shading removal)
- Camera calibration and distortion correction (see the calibration sketch below)
- Contrast enhancement tuned for metal surfaces

```python
import cv2
import numpy as np

def normalize_and_undistort(img_gray, K, dist):
    """
    img_gray: grayscale image
    K, dist:  camera intrinsics & distortion coefficients
    return:   undistorted + illumination-normalized image
    """
    und = cv2.undistort(img_gray, K, dist)
    # illumination flattening (remove low-frequency shading)
    low = cv2.GaussianBlur(und, (0, 0), sigmaX=25, sigmaY=25)
    norm = cv2.divide(und, low + 1e-6, scale=255)
    return norm.astype(np.uint8)
```

The goal is not visual beauty, but **geometric consistency**: straight edges must remain straight, and circular features must not warp.
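The intrinsics `K` and distortion coefficients `dist` come from a one-time camera calibration. The sketch below shows one standard way to obtain them with OpenCV's checkerboard workflow; the pattern size and square pitch are illustrative assumptions, not values from the actual rig.

```python
import cv2
import numpy as np

def calibrate_camera(image_paths, pattern=(9, 6), square_mm=5.0):
    """Estimate K and dist from checkerboard images (one-time setup)."""
    # 3D corner positions in the board's own plane (Z = 0)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            continue
        # refine detected corners to subpixel precision
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, rms  # rms = reprojection error in pixels
```

In a setup like this, the same calibration target can also anchor the millimeters-per-pixel scale used later in the measurement stage.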
## 2. Precision Edge Extraction

Measurement accuracy is limited by edge localization accuracy. Instead of relying on standard edge detectors alone, the system uses a staged approach:

- Adaptive thresholding
- Edge enhancement filtering
- Morphological cleanup
- Subpixel edge refinement

The following simplified snippet illustrates how edge masks are generated for subsequent geometric fitting.

```python
def extract_edges(img_gray):
    # coarse binarization that tolerates uneven lighting
    bw = cv2.adaptiveThreshold(
        img_gray, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 31, 3
    )
    # thin edges extracted from the binary mask
    edges = cv2.Canny(bw, 60, 150)
    return edges
```

The output is a thin, continuous edge representation suitable for geometric fitting, even when edges are partially broken.

## 3. From Contours to Geometry

Once edges are extracted, the system stops thinking in pixels. Contours are decomposed into **geometric primitives**, including:

- Lines
- Circular arcs
- Full circles
- Composite features (slots, keyways)

```python
def contours_to_segments(edge_img):
    """
    return: list of polyline segments (each is Nx2 points)
    """
    cnts, _ = cv2.findContours(edge_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    segments = []
    for c in cnts:
        pts = c.squeeze(axis=1)  # (N,2)
        if len(pts) < 30:
            continue  # drop tiny fragments (noise)
        segments.append(pts)
    return segments
```

Each segment is classified based on curvature behavior and validated using robust fitting techniques. Distance-based support checks ensure that fitted geometry is actually supported by the image, rather than being an artifact of noise. This approach allows the system to recover geometry even when features are incomplete or fragmented.
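To make the fitting step concrete, here is a minimal sketch of an algebraic least-squares circle fit (the Kåsa method), one common choice for this kind of primitive fitting; the actual system may use a different robust estimator. The fitted circle is then sampled so it can be checked against the edge image.

```python
import numpy as np

def fit_circle_kasa(pts):
    """
    Algebraic least-squares circle fit (Kåsa method).
    pts: (N,2) array of edge points
    return: (cx, cy, r)
    """
    x, y = pts[:, 0].astype(float), pts[:, 1].astype(float)
    # solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(max(cx**2 + cy**2 - F, 0.0))
    return cx, cy, r

def sample_circle(cx, cy, r, n=180):
    """Sample points on the fitted circle for the support check."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.column_stack([cx + r * np.cos(t), cy + r * np.sin(t)])
```

A fit is kept only if enough of its sampled points lie near real edge pixels, which is exactly what the distance-based support check in the next section measures.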
## 4. Geometry-Driven Reasoning (The Critical Layer)

This is where the system differentiates itself from most vision pipelines. Instead of detecting features independently, it reasons about **relationships between primitives**:

- Concentric circles are grouped to identify counterbores or countersinks (sketched below)
- Line–arc–line patterns are interpreted as slots
- Short linear segments at specific angles are classified as chamfers
- Constant-curvature connectors between edges are recognized as fillets

```python
def build_dist_map(edge_img):
    # distance from every pixel to the nearest edge pixel
    inv = cv2.bitwise_not(edge_img)
    dist = cv2.distanceTransform(inv, cv2.DIST_L2, 3)
    return dist

def support_score(dist_map, samples_xy, max_d=1.5):
    """
    samples_xy: (M,2) points sampled on fitted primitive
    return: ratio of samples close to real edges
    """
    h, w = dist_map.shape
    ok = 0
    total = 0
    for x, y in samples_xy:
        x, y = int(round(x)), int(round(y))
        if 0 <= x < w and 0 <= y < h:
            total += 1
            if dist_map[y, x] <= max_d:
                ok += 1
    return ok / max(total, 1)
```

This rule-based geometric reasoning layer encodes engineering knowledge directly into the system, allowing it to interpret parts in ways that purely data-driven models cannot.
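As a concrete illustration of this reasoning layer, the sketch below groups fitted circles with nearly coincident centers into counterbore candidates. The pixel tolerances are illustrative assumptions, not the system's actual thresholds.

```python
def group_concentric(circles, center_tol_px=3.0, min_r_gap_px=5.0):
    """
    Group fitted circles whose centers nearly coincide.
    circles: list of {cx, cy, r_px}
    return:  list of (outer, inner) pairs, e.g. counterbore candidates
    """
    pairs = []
    for i, a in enumerate(circles):
        for b in circles[i + 1:]:
            d = ((a["cx"] - b["cx"])**2 + (a["cy"] - b["cy"])**2) ** 0.5
            if d > center_tol_px:
                continue  # centers too far apart to be concentric
            outer, inner = (a, b) if a["r_px"] > b["r_px"] else (b, a)
            if outer["r_px"] - inner["r_px"] >= min_r_gap_px:
                pairs.append((outer, inner))
    return pairs
```

The same pattern, match primitives first and then test a geometric predicate, extends naturally to slots, chamfers, and fillets.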
## 5. Dimensional Measurement

Once geometry is reconstructed, measurement becomes deterministic. Because all features exist in calibrated coordinate space, the system can compute:

- Hole diameters and positions
- Center-to-center distances
- Slot widths and lengths
- Fillet radii
- Chamfer dimensions
- Outer boundary dimensions

```python
def px_to_mm(x_px, scale_mm_per_px):
    return x_px * scale_mm_per_px

def measure_hole(circle, scale_mm_per_px):
    """
    circle: {cx, cy, r_px}
    """
    diameter_mm = px_to_mm(2 * circle["r_px"], scale_mm_per_px)
    return {"diameter_mm": diameter_mm,
            "center_px": (circle["cx"], circle["cy"])}
```

In controlled tests with consumer-grade industrial cameras, typical accuracy falls within **±0.02–0.10 mm**, depending on feature size and image resolution, without telecentric lenses or specialized optics.

## 6. CAD Reconstruction and Output

The final output is a structured geometric representation:

- Lines
- Arcs
- Circles
- Feature metadata

This representation can be:

- Exported as DXF (see the export sketch below)
- Compared directly against reference CAD
- Used for automated inspection or tolerance checks
- Visualized as an overlay for operator verification

```python
def to_geometry_objects(lines, arcs, circles):
    objs = []
    for ln in lines:
        objs.append({"type": "LINE", "p1": ln["p1"], "p2": ln["p2"]})
    for a in arcs:
        objs.append({"type": "ARC", "c": a["c"], "r": a["r"],
                     "a0": a["a0"], "a1": a["a1"]})
    for c in circles:
        objs.append({"type": "CIRCLE", "c": (c["cx"], c["cy"]), "r": c["r_px"]})
    return objs
```

At this point, the image has effectively been converted into a measurable, machine-readable part description.
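The DXF export itself is straightforward because these geometry objects map almost one-to-one onto DXF entities. Below is a minimal sketch using the `ezdxf` library (one possible choice; the article does not specify the exporter). Arc angles are assumed to be in degrees, as `ezdxf` expects.

```python
import ezdxf

def export_dxf(objs, path="part.dxf", scale_mm_per_px=1.0):
    """Write reconstructed geometry to a DXF file (units: mm)."""
    doc = ezdxf.new("R2010")
    msp = doc.modelspace()
    s = scale_mm_per_px  # convert pixel coordinates to millimeters
    for o in objs:
        if o["type"] == "LINE":
            msp.add_line((o["p1"][0] * s, o["p1"][1] * s),
                         (o["p2"][0] * s, o["p2"][1] * s))
        elif o["type"] == "ARC":
            msp.add_arc(center=(o["c"][0] * s, o["c"][1] * s),
                        radius=o["r"] * s,
                        start_angle=o["a0"], end_angle=o["a1"])
        elif o["type"] == "CIRCLE":
            msp.add_circle((o["c"][0] * s, o["c"][1] * s), o["r"] * s)
    doc.saveas(path)
```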
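And once measured dimensions sit next to nominal values from the drawing, a tolerance check reduces to a comparison. A minimal sketch, with an assumed nominal/tolerance input format:

```python
def check_tolerance(measured_mm, nominal_mm, tol_mm):
    """Return a pass/fail record for one dimension."""
    deviation = measured_mm - nominal_mm
    return {
        "nominal_mm": nominal_mm,
        "measured_mm": measured_mm,
        "deviation_mm": deviation,
        "in_tolerance": abs(deviation) <= tol_mm,
    }

# e.g. a 10.00 mm hole with a +/-0.05 mm tolerance
report = check_tolerance(measured_mm=10.03, nominal_mm=10.0, tol_mm=0.05)
```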
## Why Not Use Deep Learning?

Deep learning excels at classification and segmentation, but inspection is fundamentally different:

- Precision matters more than recognition
- Subpixel accuracy is required
- Engineering constraints (tangency, curvature continuity) are rule-based
- Annotated datasets for manufacturing geometry are scarce

In practice, the most robust solution is a **hybrid approach**: classical computer vision + robust geometric fitting + rule-based reasoning, with AI used selectively rather than universally.

## Why This Matters

Lowering the cost and complexity of precision inspection has real impact:

- Small manufacturers gain access to automated QC
- Measurement throughput increases
- Manual inspection errors decrease
- Engineering drawings and physical parts can be analyzed with the same pipeline

By replacing expensive hardware with intelligent geometry-driven vision, inspection becomes faster, cheaper, and more accessible.

## What's Next

Future extensions include:

- Multi-view reconstruction for 3D features
- Automated tolerance evaluation
- Integration with robotic handling
- AI-assisted interpretation of ambiguous geometry

But the core principle will remain the same:

> Inspection is not about seeing objects. It is about understanding geometry.

## Author Note

This system is part of an ongoing engineering effort and continues to evolve. The ideas presented here reflect practical lessons learned from real-world image-based measurement challenges.