Tiny ImageNet Visual Recognition Challenge

Welcome to the Tiny ImageNet evaluation server. The Tiny ImageNet Challenge is the default course project for Stanford CS231N. It runs similarly to the ImageNet challenge (ILSVRC). The goal of the challenge is to do as well as possible on the image classification problem. You will submit your final predictions on a test set to this evaluation server, and we will maintain a class leaderboard.

Tiny ImageNet has 200 classes. Each class has 500 training images, 50 validation images, and 50 test images. We have released the training and validation sets with images and annotations. We provide both class labels and bounding boxes as annotations; however, you are asked only to predict the class label of each image, without localizing the objects. The test set is released without labels. You can download the whole Tiny ImageNet dataset here.

We measure performance by test set error rate: the fraction of test images that the model classifies incorrectly. To submit your predictions on the test set, name your submission file <your SUNetID>.txt and upload it from your local machine. Your submission should be a two-column file with 10,000 lines, where each line contains a test image filename and its predicted class id. A sample line might look like:

test_9925.JPEG n01910747

This file illustrates a random-guessing submission, which achieves chance accuracy of 0.005 (1/200). Note that the class ids correspond to synsets in ImageNet. For example, you can browse images and metadata for class id n01910747 using this link.
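A valid submission file of the kind described above can be generated in a few lines. The sketch below builds a random-guessing baseline; the test filenames and the list of 200 synset ids are placeholders (in the real dataset they come from the released test images and the class-id list), so substitute your own values.

```python
import random

# Placeholder class ids -- in practice, read the 200 real ImageNet synset
# ids (e.g. n01910747) from the dataset's class list.
class_ids = [f"n{i:08d}" for i in range(200)]

# Placeholder test filenames -- in practice, list the released test images.
filenames = [f"test_{i}.JPEG" for i in range(10000)]

# One "<filename> <class id>" pair per line, 10,000 lines total.
lines = [f"{name} {random.choice(class_ids)}" for name in filenames]
with open("sunetid.txt", "w") as f:  # rename to <your SUNetID>.txt
    f.write("\n".join(lines) + "\n")
```

Since each guess is uniform over 200 classes, this baseline is expected to score about 0.995 error (0.005 accuracy).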

Your rank will be updated on the leaderboard once the submission is accepted. Ill-formatted files are automatically rejected. To prevent wild guessing, you may make a new submission no sooner than 2 hours after your last one. We strongly recommend saving your best submission file on disk for grading purposes, since every submission overwrites your previous record on the server. By default, your full name will be displayed on the leaderboard; you are welcome to contact us if you would like to use a nickname instead. Please contact us by email if you have any questions about the evaluation server. Good luck!


#    Name                         Error Rate   # Submissions
2    Kim, Hansohl Eliott          0.311        17
5    Zhai, Andrew Huan            0.446        4
8    Ebrahimi, Mohammad Sadegh    0.561        5
9    Ting, Jason Ming             0.616        17
10   Random Guesser               0.995        12