Runtime Interface
PLANQK offers an asynchronous interface for executing services, since a service execution might take several hours (e.g., when training variational circuits). Therefore, each Service API has one endpoint for submitting (i.e., starting) a service execution and further endpoints to poll for the execution status and the result. Polling avoids client timeouts when waiting for the results of long-running operations.
We support two runtime configurations: (1) `Python Template` for Python projects, e.g., based on the PLANQK starter template for Python, and (2) `Docker` to build a custom Docker container that runs as a one-shot process (see the planqk-starter-docker repository for an example).
Python Template
When starting with PLANQK, we recommend using `Python Template` as your runtime configuration. It is best to use the PLANQK CLI to create a new project based on our starter template planqk-starter-python. When running `planqk init`, simply select `Python Starter` as the type of starter project.
Lifecycle
The Python Template expects a `src` package in the root directory of your project. The `src` package must contain an `__init__.py` and a `program.py` file, the latter containing the `run()` method:

```python
from typing import Any, Dict

def run(data: Dict[str, Any], params: Dict[str, Any]) -> Dict[str, Any]:
    pass
```
For each service execution, the runtime creates a new Python process and calls the `run()` method. The Python process terminates after the `run()` method returns a result or raises an exception.
The next section explains how the `data` and `params` arguments are used to access the input provided by the user through the Service API.
Input
A PLANQK Service expects a JSON object as input, provided by the user through the Service API (see the `POST /` endpoint) in the form of `{ "data": <data>, "params": <params> }`.
The runtime takes the top-level properties of the input JSON object and passes them as arguments to the `run()` method. For example, given the following input:
```json
{
  "data": { "values": [1, 2, 3] },
  "params": { "round_up": true }
}
```
The runtime would pass this input as arguments to the following `run()` method:

```python
from typing import Any, Dict

def run(data: Dict[str, Any], params: Dict[str, Any]) -> Dict[str, Any]:
    pass
```
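To make this concrete, here is a minimal sketch of a `run()` implementation for a hypothetical summing service; the semantics of `values` and `round_up` are assumptions chosen to match the example input above:

```python
import math
from typing import Any, Dict

def run(data: Dict[str, Any], params: Dict[str, Any]) -> Dict[str, Any]:
    # Sum all provided values; optionally round the result up to an integer.
    total = sum(data["values"])
    if params.get("round_up", False):
        total = math.ceil(total)
    return {"sum": total}
```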
Similarly, the runtime supports Pydantic models to define the input data and parameters:

```python
from typing import Any, Dict, List
from pydantic import BaseModel

class InputData(BaseModel):
    values: List[float]

class InputParams(BaseModel):
    round_up: bool

def run(data: InputData, params: InputParams) -> Dict[str, Any]:
    pass
```
Output
Main Result
A service may produce output by returning a JSON-serializable object from the `run()` method. The result endpoint of the Service API (`GET /{id}/result`) returns this output in the HTTP response body. We recommend returning a dictionary or a Pydantic model; PLANQK automatically serializes such return types into the HTTP response of your Service API.
For example, if the `run()` method returned a dictionary like `{ "sum": 6 }`, the result endpoint would return the following JSON response:
```json
{
  "sum": 6,
  "_embedded": {
    "status": {
      // omitted for brevity
    }
  },
  "_links": {
    "self": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1/result"
    },
    "status": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1"
    },
    "output.json": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1/result/output.json"
    }
  }
}
```
Additional Output (Files)
PLANQK treats any file written to `/var/runtime/output` as output of the service. Additional files written to this directory can later be downloaded through the Service API. The respective links are provided in the Service API response, according to the HAL specification (see the example above). For example, if you write a file `result.txt` to `/var/runtime/output`, the result response will contain the following link to download the file: `https://<service-endpoint>/<service-execution-id>/result/result.txt`.
We recommend using additional files only for large outputs that should be downloaded by the user.
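A sketch of how such an additional file could be written from a Python service; the helper and its `output_dir` parameter are illustrative (on PLANQK the files must end up in `/var/runtime/output`):

```python
from pathlib import Path

# The runtime serves every file in this directory as a downloadable result file.
OUTPUT_DIR = Path("/var/runtime/output")

def write_additional_output(filename: str, content: str, output_dir: Path = OUTPUT_DIR) -> Path:
    # `output_dir` is a parameter only to allow local testing;
    # in the PLANQK runtime, keep the default.
    output_dir.mkdir(parents=True, exist_ok=True)
    path = output_dir / filename
    path.write_text(content)
    return path
```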
Log Output
You can use logging to inform the user about the progress of the service execution or to provide additional information about the result.
You may produce log output either by printing to stdout or by using a logging framework. Users can retrieve the log output via the `GET /{id}/log` endpoint of the Service API.
DO NOT log sensitive information like passwords, API keys, or any other type of confidential information.
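A minimal sketch of progress logging that ends up in the execution log; the logger name and message format are illustrative:

```python
import logging
import sys

# Log to stdout so that messages appear in the service execution log.
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger("my_service")  # hypothetical logger name

def report_progress(step: int, total: int) -> str:
    message = f"processed step {step} of {total}"
    logger.info(message)
    return message
```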
Build Process
The Python Template expects a `requirements.txt` file in the root directory of your project. This file should list all Python packages your project requires. The runtime installs these packages in a virtual environment when containerizing your project.
The runtime also expects a `src` package in the root directory of your project. In addition, there must be a `program.py` file in the `src` package, containing a `run()` method. This method is called by the runtime to execute your service.
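An illustrative `requirements.txt`; the packages and version pins are assumptions, so list whatever your service actually imports:

```
pydantic==2.7.1
qiskit==1.1.0
```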
Docker
If you want to use a custom Docker container to power your service, you must select `Docker` as your runtime configuration (under Service Details).
We recommend using `Docker` only if one of the following reasons applies:
- You need OS-level packages not included in the Python Template. With Docker, you have complete control over your base operating system and installed packages.
- Your application is in a language not yet supported by PLANQK, like Go or Rust.
- You need guaranteed reproducible builds. We release regular updates to our coding templates to improve functionality, security, and performance. While we aim for full backward compatibility, using a Dockerfile is the best way to ensure that your production runtime is always in sync with your local builds.
Examples and Starter Template
A starter template for a custom Docker container project can be found in our planqk-starter-docker repository. Another example, using Node.js, can be found in our planqk-samples repository.
Lifecycle
You have to create a Docker container that can be run as a one-shot process. This means the container starts, runs your code once, and then exits. You may use exit codes to indicate success (exit code `0`) or failure (exit code `1`) of your code.
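A minimal one-shot entry point in Python could look like this; the structure is a sketch, and only the exit-code convention is prescribed above:

```python
import sys

def main() -> int:
    # Run the service logic exactly once; return 0 on success, 1 on failure.
    try:
        # ... your one-shot workload goes here ...
        return 0
    except Exception:
        return 1

if __name__ == "__main__":
    sys.exit(main())
```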
Input
PLANQK ensures that the input provided via the Service API in the form of `{ "data": <data>, "params": <params> }` is mounted into the `/var/runtime/input` directory of the running container.
The runtime creates a file for each top-level property of the input JSON object. For example, given the following input:
```json
{
  "data": { "values": [1, 2, 3] },
  "params": { "round_up": true }
}
```
The runtime creates the following files:
- `data.json` with the content `{ "values": [1, 2, 3] }`
- `params.json` with the content `{ "round_up": true }`
IMPORTANT
The input for a service must always be a valid JSON object.
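Inside the container, these files can be read back like this; the helper and its `input_dir` parameter are illustrative (on PLANQK the input is mounted at `/var/runtime/input`):

```python
import json
from pathlib import Path
from typing import Any, Dict, Tuple

# Directory into which PLANQK mounts the user-provided input.
INPUT_DIR = Path("/var/runtime/input")

def read_input(input_dir: Path = INPUT_DIR) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    # `input_dir` is a parameter only to allow local testing.
    data = json.loads((input_dir / "data.json").read_text())
    params = json.loads((input_dir / "params.json").read_text())
    return data, params
```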
Output
PLANQK treats any file written to `/var/runtime/output` as the output of the service.
Main Result
Output that should be returned in the HTTP response of the result endpoint (`GET /{id}/result`) must be written to the file `output.json`. For example, if you write the content `{ "sum": 6 }` to `/var/runtime/output/output.json`, the result endpoint will return the following JSON response:
```json
{
  "sum": 6,
  "_embedded": {
    "status": {
      // omitted for brevity
    }
  },
  "_links": {
    "self": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1/result"
    },
    "status": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1"
    },
    "output.json": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1/result/output.json"
    }
  }
}
```
Backward Compatibility
You may also write the content of the `output.json` file to stdout, in the following format:

```
PlanQK:Job:MultilineResult
{
  "sum": 42
}
PlanQK:Job:MultilineResult
```

Only the first occurrence of the `PlanQK:Job:MultilineResult` block is considered as output. The content of the block must be a valid JSON object.
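For example, a Python one-shot process could emit such a block as follows; the helper name is illustrative:

```python
import json
from typing import Any, Dict

MARKER = "PlanQK:Job:MultilineResult"

def print_result(result: Dict[str, Any]) -> str:
    # Wrap the JSON-serialized result in the marker block and write it to stdout.
    block = f"{MARKER}\n{json.dumps(result)}\n{MARKER}"
    print(block)
    return block
```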
Additional Output (Files)
Any other file written to `/var/runtime/output` can later be downloaded by the user. The respective links are provided in the Service API response, according to the HAL specification (see the example above). For example, if you write a file `result.txt` to `/var/runtime/output`, the result response will contain the following link to download the file: `https://<service-endpoint>/<service-execution-id>/result/result.txt`.
We recommend writing the main result to `output.json` and using additional files only for large outputs that should be downloaded by the user.
Log Output
You can use logging to inform the user about the progress of the service execution or to provide additional information about the result.
You may produce log output either by printing to stdout or by using a logging framework. Users can retrieve the log output via the `GET /{id}/log` endpoint of the Service API.
DO NOT log sensitive information like passwords, API keys, or any other type of confidential information.
Build Process
The Docker runtime expects a `Dockerfile` in the root directory of your project. This file should contain the instructions to build your Docker container; the runtime builds the container according to these instructions.
Make sure you use `CMD` or `ENTRYPOINT` to run your code in the Docker container. For example, if you have a Python script `program.py` in a Python package `starter` that you want to run, add the following line to your `Dockerfile`:

```dockerfile
CMD ["python", "-m", "starter.program"]
```
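Putting the pieces together, a complete `Dockerfile` might look like the following sketch; the base image, its version, and the project layout are assumptions:

```dockerfile
# Base image and versions are illustrative assumptions.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first to benefit from Docker layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code.
COPY starter/ ./starter/

# Run the service as a one-shot process.
CMD ["python", "-m", "starter.program"]
```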