This CLI generates a challenge folder based on the challenge-generator-backend API. The content of the folder is described in the challenge content section.
The usual flow is as follows:
```mermaid
graph LR
A((START)) --> B{Select Type}
B --> |Custom| E[\Answer all the questions\]
B --> |Completely Random| F[(Fetch Challenge from the API)]
E --> F
F --> G[/Create the Files/]
G --> H((END))
```
For further customization, go to the Advanced Usage section.
```bash
npx @roeeyn/challenge-generator --help
```
```bash
# To skip a parameter just press Enter
npx @roeeyn/challenge-generator
```
### python3 challenge

```bash
npx @roeeyn/challenge-generator --programming-language python3 -s
```
### javascript challenge

```bash
npx @roeeyn/challenge-generator --programming-language javascript -s
```
### java challenge

:warning: Even though we can create a challenge for Java, we haven't tested it on Judge0 yet, so the submission tools may not work correctly, as the test framework and run file are not yet implemented.

```bash
npx @roeeyn/challenge-generator --programming-language java -s
```
The created challenge folder contains the following files:
This is the file which contains the challenge description and some of the provided examples. It is usually provided as a markdown file, so the formatting is done automatically.
The index file contains the challenge's initial function, which is given directly to the user as a starting point.
This is the file which contains all the unit tests for the challenge. It usually makes use of the custom testing framework provided; see the Test Framework File section for details.
This file contains our minified custom testing framework, used to validate that the code uploaded by the user is correct. To see the original framework, see the templates folder.
:warning: We have included the most commonly used functions, but some challenges contain challenge-specific tests that our test framework may not support. This is not usual, though.
This file contains the execution script that runs whenever this challenge is uploaded to Judge0.
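To make the above concrete, here is a rough sketch of a generated folder. The folder name is taken from the Judge0 example later in this README; the file names themselves are hypothetical and depend on the CLI's templates:

```bash
# Inspect a generated challenge folder. The file names below are
# hypothetical illustrations of the files described above:
ls ./challenge-proper-modulo-operator
# README.md          -> challenge description and examples
# index.js           -> initial function given to the user
# index.test.js      -> unit tests for the challenge
# testframework.js   -> minified custom testing framework
# run.sh             -> execution script used by Judge0
```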
You can filter most of the parameters you want the challenge to match. These are the following:
Flag | Requires Value? | Description | Example |
---|---|---|---|
-V, --version | ❌ | Prints the CLI version | 0.0.1 |
-t, --title | ✅ | Title regex to search | ort$ e.g. titles which end with 'ort' |
--edabit-id | ✅ | If you know the value of the edabit id, you can set it directly | 6vSZmN66xhMRDX8YT |
-a, --author | ✅ | Author regex to search | ^M e.g. authors which start with 'M' |
-t, --tags | ✅ | Tags to search, separated by commas | strings,loops |
-d, --min-difficulty | ✅ | The minimum difficulty the challenge should have, from 0 (easiest) to 5 (hardest) | 2.5 |
-q, --min-quality | ✅ | The minimum quality the challenge should have, from 0 (lowest) to 5 (highest) | 2.5 |
--programming-language | ✅ | The challenge programming language. Only java, javascript, and python3 are supported. | javascript |
-s, --skip-confirmation | ❌ | Skips the confirmation message and discards the parameters that were not provided | N/A |
-v, --verbose | ❌ | Prints debugging information | N/A |
-h, --help | ❌ | Prints this information | N/A |
We created a script (judge0-submissioner) that helps you test submissions to Judge0 easily; you can find it in the examples folder. The basic usage of this script is the following:
```bash
# Maybe you need to give execution access
chmod +x ./examples/judge0-submissioner.sh

# Set the judge0 token (auth, not user)
export JUDGE0_AUTH_TOKEN='YOUR_AUTH_TOKEN'

# Submit the challenge solution and the extension (py or js)
./examples/judge0-submissioner.sh your_challenge_directory your_lang_extension

# Real example
./examples/judge0-submissioner.sh ./challenge-proper-modulo-operator js
```
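If you prefer to bypass the script, a Judge0 submission is a plain HTTP call. The sketch below uses the standard Judge0 REST API and assumes a Judge0 instance reachable at a hypothetical $JUDGE0_URL; language id 63 is JavaScript (Node.js) in Judge0's default language list:

```bash
# Minimal sketch of a raw Judge0 submission. $JUDGE0_URL is a
# hypothetical variable pointing at your Judge0 instance; the
# X-Auth-Token header carries the same token the script uses.
curl -s -X POST "$JUDGE0_URL/submissions?base64_encoded=false&wait=true" \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: $JUDGE0_AUTH_TOKEN" \
  -d '{
        "language_id": 63,
        "source_code": "console.log(\"hello from the challenge\");"
      }'
```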