Indy Autonomous Challenge (IAC)

May 12, 2023

by Brett Hamilton

I recently met with Janam Sanghavi, Systems Engineer for the Indy Autonomous Challenge (IAC), in a reimagined former warehouse near the center of Indianapolis. I wanted to learn more about the cars and, in particular, the microelectronics used in their autonomous systems. But before I get into the technical details, I want to express just how impressed I am with IAC from both an operational efficiency/cost perspective and a safety perspective.

The Dallara AV-21 racecars are identical in every detail, from the mechanical systems to the installed sensor arrays to the onboard processing units (which occupy the driver’s cockpit area). Each car has two separate communication data links. One link gives the competing team access to telemetry, composed of the sensor data used for navigating and controlling the car’s speed and position. The other data link is for the IAC team, which monitors all the mechanical and critical safety systems; it also gives the IAC team the ability to independently stop the car at any sign of trouble.
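To make that dual-link arrangement concrete, here is a minimal sketch of how an independent race-control watchdog might work: listen for a vehicle heartbeat and command a stop if the heartbeat lapses or reports a fault. The UDP transport, addresses, message format, and timeout here are my own illustrative assumptions, not IAC's actual protocol.

```python
import socket

HEARTBEAT_PORT = 9001                 # illustrative race-control listening port
VEHICLE_ADDR = ("10.0.0.21", 9000)    # hypothetical vehicle safety endpoint
STOP_COMMAND = b"EMERGENCY_STOP"
TIMEOUT_S = 0.25                      # declare trouble after 250 ms of silence

def monitor_vehicle() -> None:
    """Listen for vehicle heartbeats; command a stop if they lapse or report a fault."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", HEARTBEAT_PORT))
    sock.settimeout(TIMEOUT_S)
    while True:
        try:
            packet, _ = sock.recvfrom(64)      # e.g. b"OK" plus status flags
            if packet.startswith(b"FAULT"):    # vehicle self-reported a problem
                sock.sendto(STOP_COMMAND, VEHICLE_ADDR)
        except socket.timeout:
            # Heartbeat lapsed: stop the car independently of the team's link.
            sock.sendto(STOP_COMMAND, VEHICLE_ADDR)

if __name__ == "__main__":
    monitor_vehicle()
```

The key design point is independence: this channel shares nothing with the team's telemetry link, so a bug in a competitor's software cannot take away race control's ability to stop the car.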

What does autonomous racing have to do with you?

The technology used by the IAC is the same technology widely implemented in industry, giving the teams hands-on experience with the hardware and software. Because these technologies have applications across multiple sectors, that experience remains valuable and practical well beyond the racetrack.

There is a lot of software involved with the embedded microelectronics technology that is not explicitly covered in this article but is certainly important. Below is a representative look at some of the individual electronic components as a way to gauge the technology nodes and foundries used. All of the information presented here is available online from open sources.

I am impressed that the system is designed around mature, yet very capable and field-proven Commercial Off-the-Shelf (COTS) assemblies and sensors. This offers multiple benefits for the project, including reduced cost (both initially and in sustainment), mature firmware and software, and experienced support personnel, all of which enable faster integration and reduced risk. (DoD should take note!)

A dSPACE AUTERA module provides high computational power and wide data bandwidth (50 Gbit/s), supporting 6 cameras, 3 lidars, 3 radars, and 3 GNSS receivers (2 NovAtel PowerPaks and 1 VectorNav) with 4 main antennae for localization, all in a compact form factor. The system has significant technical capabilities:

  • Logging bandwidth per AUTERA AutoBox: up to 50 Gbit/s sustained
  • Storage capacity per AUTERA AutoBox: up to 32 TB
  • Intel® Xeon® CPU with 12 cores (12 x 2.0 GHz)
  • 32 GB RAM (standard configuration), up to 512 GB possible on request
  • 2 x USB 3.0, 2 x USB 2.0
  • 4 x 10 Gbit/s Ethernet
  • 4 x 1 Gbit/s Ethernet
  • 1 x Audio I/O and several multi I/O channels
  • 6 slots for dSPACE qualified extensions, e.g., CAN FD, Ethernet (100/1000/10000 BASE-T; 100/1000 BASE-T1), raw data interfaces (GMSL II, FPD-Link III, CSI-2), hardware accelerators (GPUs, e.g., NVIDIA Quadro RTX 6000; FPGAs)
  • In-vehicle power supply 10 – 35 V
  • Shock and vibration tested (ISO 16750-3:2007)
  • Size: 330 x 376 x 156 mm
  • Ambient temperature: -20 to 55 °C
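As a rough sanity check on that 50 Gbit/s logging figure, the sketch below tallies per-sensor data rates for the suite described above. The individual rates are my own illustrative assumptions, not published IAC or dSPACE numbers; the point is simply that the high-rate cameras and lidars dominate the budget.

```python
# Back-of-envelope logging budget against the AUTERA's 50 Gbit/s sustained rate.
# Per-sensor rates are illustrative assumptions, not published figures.
SENSOR_RATES_GBPS = {
    "cameras (x6)":  6 * 3.0,    # raw high-frame-rate video dominates
    "lidars (x3)":   3 * 1.0,    # dense point clouds
    "radars (x3)":   3 * 0.1,    # object lists are comparatively light
    "GNSS/INS (x3)": 3 * 0.001,  # navigation solutions are tiny
}

total = sum(SENSOR_RATES_GBPS.values())
for name, rate in SENSOR_RATES_GBPS.items():
    print(f"{name:>14}: {rate:6.3f} Gbit/s")
print(f"{'total':>14}: {total:6.3f} Gbit/s of a 50 Gbit/s budget")
```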

The AUTERA module uses a 22 nm Intel Xeon processor and an NVIDIA TU102 GPU built on a 12 nm process. One interesting note: I found TU102 chips marked with both Taiwan and Korea as the country of origin (images on right). So, is the TU102 manufactured by TSMC or by Samsung?

The localization sensor suite features the VectorNav VN-310, a tactical-grade, high-performance dual-antenna GNSS-aided inertial navigation system (dual GNSS/INS). Incorporating the latest inertial sensor and GNSS technology, the VN-310 combines 3-axis accelerometers, 3-axis gyros, 3-axis magnetometers, and two multi-band L1/L2/E5b GNSS receivers into a compact embedded module or ruggedized package to deliver a high-accuracy position, velocity, and attitude solution under both static and dynamic conditions. VectorNav's proprietary Extended Kalman Filter INS delivers a coupled position, velocity, and continuous attitude solution over the complete 360° range of operation.
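The VN-310's filter is proprietary, but the core idea of GNSS-aided inertial navigation can be shown with a toy one-dimensional Kalman filter: dead-reckon position and velocity from the accelerometer at high rate, then correct the accumulated drift with each GNSS position fix. This is a minimal sketch of the concept (linear, 1-D, with made-up noise values), not VectorNav's implementation.

```python
import numpy as np

def ins_gnss_step(x, P, accel, gnss_pos, dt=0.01):
    """One predict/update cycle of a toy 1-D GNSS-aided INS.

    x = [position, velocity]; accel is the IMU reading (m/s^2);
    gnss_pos is the GNSS position fix (m). Noise values are illustrative.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
    B = np.array([0.5 * dt**2, dt])        # how acceleration enters the state
    Q = np.diag([1e-4, 1e-3])              # process noise (IMU drift)
    H = np.array([[1.0, 0.0]])             # GNSS observes position only
    R = np.array([[0.5]])                  # GNSS measurement noise (m^2)

    # Predict: dead-reckon from the accelerometer.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

    # Update: pull the dead-reckoned state toward the GNSS fix.
    y = gnss_pos - H @ x                            # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example: start at rest, accelerate at 2 m/s^2; true position is t^2.
x, P = np.zeros(2), np.eye(2)
for k in range(1, 101):
    x, P = ins_gnss_step(x, P, accel=2.0, gnss_pos=(k * 0.01) ** 2)
```

The dual-antenna feature of the VN-310 additionally resolves heading directly from the GNSS baseline, which this 1-D toy does not attempt to model.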

The Cisco Ultra-Reliable Wireless Backhaul network supporting wayside-to-vehicle communications has been used at multiple events, including the Autonomous Challenge @ CES. This network allowed the IAC teams to collect data and interface with the vehicles while they developed and tested their software.

The network also provided the RTK corrections for precise onboard vehicle position data and supported the race control-to-vehicle communications. It supported multiple wayside-to-vehicle connections with single-millisecond latency as racecar speeds approached 200 miles per hour.
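That single-millisecond latency figure is meaningful because of how far a car travels in a millisecond at race speed. The quick conversion below makes the point.

```python
MPH_TO_MPS = 0.44704  # miles per hour to metres per second

for speed_mph in (100, 150, 200):
    v = speed_mph * MPH_TO_MPS
    print(f"{speed_mph} mph = {v:5.1f} m/s -> "
          f"{v * 0.001 * 100:.1f} cm travelled per 1 ms of latency")
```

At 200 mph the car covers roughly 9 cm per millisecond, so even a few milliseconds of network delay translate into meaningful position error for RTK corrections and race-control commands.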

In the image below (on the left), 3 lidar units can be seen. Unlike cameras and radar, Luminar's lidar provides precise three-dimensional sight in all lighting conditions, including blinding light and at night (center image). The auto industry has been instrumental in driving down the size and cost of lidar. The below-right image features Luminar's detector, which the company claims is "the most sensitive, highest dynamic range InGaAs detector in the world" when paired with its receiver ASIC.

Now for a philosophical question: the IAC is not intended to be man versus machine, or machine versus human driver. Instead, its mission is man with machine, driver with machine, akin to Advanced Driver Assistance Systems (ADAS) technology helping a racecar driver be faster and safer. Still, it raises the question: when, if ever, will an autonomous racecar beat a human driver?

The average human reaction time is approximately a quarter of a second (250 milliseconds), but some humans are quicker: fighter pilots, Formula One drivers, and championship video game players fall into the 100-120 ms bucket on the left side of the curve. So, in theory, the electronic sensors' precision and reaction times should be much better. But the volume and complexity of the information that racecar drivers must process are, in a word, daunting.
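Putting those reaction times in spatial terms makes the gap vivid: the sketch below computes how far a car at 200 mph travels before driver or computer even begins to respond. The 20 ms figure for a sensor-to-actuator pipeline is my own assumption for illustration.

```python
SPEED_MPS = 200 * 0.44704  # ~89.4 m/s at 200 mph

REACTION_TIMES_S = {
    "average human":            0.250,
    "elite human":              0.110,
    "autonomy stack (assumed)": 0.020,
}

for label, t in REACTION_TIMES_S.items():
    print(f"{label:>26}: {SPEED_MPS * t:5.1f} m travelled before any response")
```

By this measure an average driver covers about 22 m before reacting, an elite driver about 10 m, and a fast autonomy stack under 2 m; the open question is whether the machine can match the human's judgment about what to do with that head start.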

Human racecar drivers make numerous, very fine adjustments to the car throughout the race, such as adjusting the weight jackers even while driving! The intricacies of racing, drafting, and devising a real-time strategy for passing also need to be considered when comparing an autonomous racecar to a human driver.

If the AI racecar team could get 10 years of race telemetry and related race data to train the systems, a very interesting experiment could be created!

Given the physical nature of racing, human perception (the “feel of the car”), visual cues, and the complexities happening in the inner ear, pure simulation and deep learning might have a tough time keeping up.