2020 DAC System Design Contest

  • Contest announcement: November 2019
  • Registration deadline: January 23, 2020 (extended from January 15, 2020)
  • Preliminary submissions due: March, April, and May 2020
  • Final submission due: June 24, 2020 (extended from June 15, 2020)
  • Finalist teams announced: June 29, 2020 (extended from June 19, 2020)
  • Award presentation: DAC 2020, July 19-22, 2020

Each team is required to register at the following link:

The 2020 System Design Contest features embedded-system implementation of neural-network-based object detection for drones. Contestants will receive a training dataset provided by our industry sponsor DJI, and a hidden dataset will be used to evaluate the performance of the designs in terms of accuracy and power. Contestants will compete to create the best-performing design on an Ultra96 v2 FPGA board. Grand cash awards will be given to the top three teams in each category. In addition, our industry sponsor Xilinx will provide a subsidized design kit ($75) to a limited number of successfully registered teams, on a first-come, first-served basis. The award ceremony will be held at the 2020 IEEE/ACM Design Automation Conference.

Eligibility: The contest is open to both industry and academia.

Contest Framework

The base design framework is provided here: the repository contains specifications for how your design should connect to our testing infrastructure.

Target Platform

The 2020 contest will use the Ultra96 v2 development board. During registration you can purchase the Ultra96 v2 board at a subsidized rate of $75, courtesy of our sponsor Xilinx.

  • Last year we used the Ultra96 v1 board. If you have this board from last year, you may choose to continue to use it instead of purchasing a v2 board (or you may use both for your testing); however, you should be aware of the potential issues.
  • By default, the Ultra96v2 PYNQ image is not set up to measure power. You will need to follow these instructions to set it up.

Training Dataset

Link to download training dataset:

Frequently Asked Questions

Previous Contest Winning Designs

Each team will submit their design once at the end of March, April, and May, and preliminary results will be posted each month. This allows you to check that your solution works on our evaluation platform. The final submission is due June 24, 2020.

Preliminary submissions, at minimum, should include your notebook (*.ipynb), and hardware files (*.bit, *.hwh). If your design uses other files, they should be included as well.


  1. Follow the example notebook provided: dac_sdc.ipynb
    1. Your notebook must run without error using the "Run All Cells" command in Jupyter.
    2. As shown in the example notebook, you must time all of your processing. You may exclude reading the images from SD card from your runtime, but all other processing must be tracked.
    3. During all tracked processing time, you must record power usage at a rate of 20 times/second (0.05s interval, as shown in the example notebook).
    4. You must use the provided command (team.save_results_xml(result_rectangle, total_time, energy)) to save your results to file.
    5. Do not hardcode any paths. You should use paths such as dac_sdc.IMG_DIR.
    6. Use the provided function (get_image_batch()) to fetch images in batches.
    7. If you are using the v1 board, be sure to fix the power rail names before submission. The rail should be "5V" and the frame "5V_power".
    8. The notebook should be split into 4 code cells as described in the example notebook.
  2. The provided file must not be modified. Do not submit your own version. Leave the sys.path.append(os.path.abspath("../common")) statement in the notebook so that the official file can be located.
  3. Place all of your files in a single zip archive and submit it.
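The timing and power-recording requirements above can be sketched as follows. This is a minimal illustration, not the official notebook: `PowerRecorder` is a placeholder that samples a constant power value every 0.05 s on a background thread (on the real Ultra96 v2 you would read the board's power rail instead), and `process_batch` stands in for your detector. The commented-out `team.save_results_xml` call is the official saving command named above.

```python
import threading
import time

class PowerRecorder:
    """Placeholder power recorder: samples every `interval` seconds on a
    background thread. On the real board, replace read_power() with an
    actual power-rail measurement."""

    def __init__(self, interval=0.05):
        self.interval = interval
        self.samples = []
        self._stop = threading.Event()
        self._thread = None

    def read_power(self):
        return 5.0  # watts; stub value for illustration

    def _run(self):
        while not self._stop.is_set():
            self.samples.append(self.read_power())
            self._stop.wait(self.interval)

    def __enter__(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._run)
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()

def process_batch(batch):
    """Stand-in for your detector; returns one bounding box per image."""
    return [(0, 0, 10, 10) for _ in batch]

# Fake batches standing in for get_image_batch() output.
batches = [["img%d" % i for i in range(4)] for _ in range(3)]
results = []
recorder = PowerRecorder(interval=0.05)   # 20 samples/second
with recorder:                            # record power only while processing
    start = time.time()
    for batch in batches:                 # image loading is excluded from timing
        results.extend(process_batch(batch))
    total_time = time.time() - start

# energy = mean recorded power * total processing time
energy = sum(recorder.samples) / len(recorder.samples) * total_time
# team.save_results_xml(results, total_time, energy)  # on the real platform
```

Note the pattern: the recorder is started immediately before processing begins and stopped immediately after, so power is only tracked during timed work.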

For the final submission, follow the instructions above. In addition:

  • Submit all source files for your design, in a zip archive.
  • Your design must be available, open-source, and in working condition in order to be considered for an award. You are permitted to use closed source tools (Xilinx's DPU); however, all of your work (any modifications and configurations to commercial, closed-source tools), must be accessible.

We will be using Piazza as a Q&A platform for the contest. Sign up using this link.

The evaluation of the designs is based on accuracy, throughput, and energy consumption.

The score for an individual team, i, is calculated as follows:

Total Score = R_IoU_i * (1 + ES_i)

  • R_IoU_i: Accuracy result for team i
  • ES_i: Energy score for team i

The accuracy metric used is Intersection over Union (IoU). To apply IoU to evaluate an object detector, we need:

  • The ground-truth bounding boxes, denoted by Ground Truth (i.e., the labeled bounding boxes that specify where in the image the object is in the xml files).
  • The detected bounding boxes from the model, denoted by DetectionResult

Given this, the IoU for a single image k is calculated as:

IoU_ik = (Area of Overlap) / (Area of Union) = |DetectionResult ∩ GroundTruth| / |DetectionResult ∪ GroundTruth|

A good example of Intersection over Union can be found here
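The IoU of two axis-aligned boxes can be computed directly from their coordinates. The helper below is purely illustrative (it is not part of the official evaluation code) and assumes boxes are given as (x1, y1, x2, y2) corner pairs:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    # Corners of the intersection rectangle (may be empty).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # overlap 25, union 175 -> ~0.1429
```

Disjoint boxes score 0, identical boxes score 1, giving the expected [0, 1] range for each image.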

Then, the accuracy result for the team i over all K images is given as:

R_IoU_i = (1/K) * sum_{k=1}^{K} IoU_ik

A minimum speed requirement of 10 FPS must be met. If the measured FPS is lower than this requirement, a penalty is applied to the IoU:

R_IoU_i = R_measured_IoU_i * min(fps_measured, 10) / 10

The energy score for a team is calculated as follows:

ES_i = max{0, 1 + 0.2 * log2(Ē / E_i)}

where E_i is the energy consumed processing all K images by team i, and Ē is the average energy consumption across all I teams.
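Putting the three formulas together, a team's total score can be computed as below. This is an illustrative sketch of the scoring math as stated above, not the official scoring script; the function takes per-image IoU values, the measured frame rate, the team's total energy, and the cross-team average energy:

```python
import math

def total_score(ious, fps_measured, team_energy, avg_energy):
    """Total Score = R_IoU_i * (1 + ES_i), with the 10 FPS penalty
    and the logarithmic energy score from the contest rules."""
    r_measured = sum(ious) / len(ious)                   # mean IoU over K images
    r_iou = r_measured * min(fps_measured, 10.0) / 10.0  # penalty below 10 FPS
    es = max(0.0, 1.0 + 0.2 * math.log2(avg_energy / team_energy))
    return r_iou * (1.0 + es)

# A team at mean IoU 0.8, 20 FPS, and exactly average energy:
print(total_score([0.8] * 5, 20, team_energy=100, avg_energy=100))  # 1.6
```

At exactly average energy the energy score is 1, halving the energy relative to the average raises it to 1.2, and running below 10 FPS scales the accuracy term down proportionally.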

  • Last modified: 2021/01/28 08:00
  • by jgoeders