Runtime Interface
Kipu Quantum Hub offers an asynchronous interface for executing services, since a service execution might take several hours (e.g., when training variational circuits). Each Service API therefore provides one endpoint for submitting (i.e., starting) a service execution and further endpoints for polling the execution status and retrieving the result. Polling avoids client timeouts while waiting for the results of long-running operations.
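For illustration, a hypothetical client might interact with a Service API as sketched below. The POST /, status, and result endpoints are described in the remainder of this page; the response field names, status values, and polling interval are assumptions made for this sketch:

```python
import time

import requests

SERVICE_ENDPOINT = "https://example.com/my-service/v1"  # hypothetical endpoint

# Submit (start) a service execution via the POST / endpoint.
response = requests.post(
    SERVICE_ENDPOINT,
    json={"data": {"values": [1, 2, 3]}, "params": {"round_up": True}},
)
execution_id = response.json()["id"]  # assumed response field

# Poll the status endpoint until the execution has finished.
while True:
    status = requests.get(f"{SERVICE_ENDPOINT}/{execution_id}").json()["status"]  # assumed field
    if status in ("SUCCEEDED", "FAILED"):  # assumed status values
        break
    time.sleep(5)

# Retrieve the result once the execution has finished.
result = requests.get(f"{SERVICE_ENDPOINT}/{execution_id}/result").json()
```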
We support two runtime configurations: (1) Python Template for Python projects structured according to our starter template for Python projects, and (2) Docker for building a custom Docker container that runs as a one-shot process (see the starter-docker repository for an example).
Python Template
When starting with the platform, we recommend using the Python Template as your runtime configuration. It is best to use the CLI to create a new project based on our starter template starter-python: when running planqk init, simply select Python Starter as the type of starter project.
Lifecycle
The Python Template expects a src package in the root directory of your project. The src package must contain an __init__.py and a program.py file, the latter containing the run() method:
```python
from typing import Any, Dict

def run(data: Dict[str, Any], params: Dict[str, Any]) -> Dict[str, Any]:
    pass
```

For each service execution, the runtime creates a new Python process and calls the run() method. The Python process terminates after the run() method returns a result or raises an exception.
The next section explains how the data and params arguments are used to access the input provided by the user through the Service API.
Input
A Kipu Quantum Hub service accepts input through multiple mechanisms, all provided by the user through the Service API. The runtime processes this input and passes it as arguments to the run() method.
JSON Input
The primary input mechanism is a JSON object provided through the Service API (see POST / endpoint) in the form of { "data": <data>, "params": <params> }.
The runtime uses the top-level properties of the input JSON object and passes them as arguments to the run() method. For example, given the following input:
```json
{
  "data": { "values": [1, 2, 3] },
  "params": { "round_up": true }
}
```

The runtime would be able to pass such an input as arguments to the following run() method:
```python
def run(data: Dict[str, Any], params: Dict[str, Any]) -> Dict[str, Any]:
    pass
```

Similarly, the runtime supports the use of Pydantic models to define the input data and parameters:
```python
from typing import Any, Dict, List

from pydantic import BaseModel

class InputData(BaseModel):
    values: List[float]

class InputParams(BaseModel):
    round_up: bool

def run(data: InputData, params: InputParams) -> Dict[str, Any]:
    pass
```
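To make the mapping concrete, here is a hypothetical run() implementation that consumes the input shown above (the summing and rounding logic is ours, purely for illustration):

```python
import math
from typing import Any, Dict, List

from pydantic import BaseModel

class InputData(BaseModel):
    values: List[float]

class InputParams(BaseModel):
    round_up: bool

def run(data: InputData, params: InputParams) -> Dict[str, Any]:
    # Sum the provided values and round up if requested.
    total = sum(data.values)
    if params.round_up:
        total = math.ceil(total)
    return {"sum": total}
```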
Special Input Types
The runtime supports special input types that are declared as additional parameters in the run() method and map to additional input fields in the JSON object:
- Secrets: For securely passing sensitive information like API tokens and credentials.

  ```json
  { "data": ..., "$secrets": { "api_token": "my-secret-token" } }
  ```

  The platform ensures that such sensitive information is only accessible during runtime and protected against accidental exposure.
- Data Pools: For providing access to large datasets through mounted file systems.

  ```json
  { "data": ..., "my_dataset": { "id": "a1b2c3d4-e5f6-7890-1234-567890abcdef", "ref": "DATAPOOL" } }
  ```
These special input types are declared using type annotations and are automatically provided by the runtime based on the parameter name and type. See the following sections for detailed documentation on each special input type.
Secrets
Services often require access to sensitive information such as API tokens, credentials, or other confidential data. The platform provides a secure mechanism to pass such secrets to your service through the SecretValue type as a special form of input.
Unlike regular JSON input parameters, secrets are:
- Provided through special environment variables (and not treated as regular JSON input)
- Automatically mapped to run() method parameters based on naming conventions
- Protected against accidental exposure through automatic redaction
- Only accessible, and only persisted, during the runtime of a service execution
You can declare secret parameters as additional arguments in your run() method by using the SecretValue type annotation:
```python
from typing import Any, Dict

from planqk.commons.secret import SecretValue

def run(data: Dict[str, Any], params: Dict[str, Any], api_token: SecretValue) -> Dict[str, Any]:
    # Access the secret value
    token = api_token.unwrap()
    # Use the token...
```

This maps to the following JSON input structure:
```json
{
  "data": ...,
  "params": ...,
  "$secrets": {
    "api_token": "my-secret-token"
  }
}
```

Secrets are fed as special environment variables into the runtime. The parameter name is converted to uppercase with a SECRET_ prefix:
- api_token → SECRET_API_TOKEN
- ibmToken → SECRET_IBM_TOKEN
- iqm_token → SECRET_IQM_TOKEN
During runtime, you access the secret value through the SecretValue interface. The SecretValue type is a secure container that provides several security features to protect sensitive information:

- Single-use unwrapping: Once unwrap() is called, the secret is locked to prevent accidental reuse.
- Automatic redaction: String representations always return [redacted] (via __str__()) or SecretValue([redacted]) (via __repr__()) to prevent sensitive data exposure in logs or debugging output.
IMPORTANT
Never log or print the unwrapped secret value. Always use the SecretValue object directly in string representations to ensure automatic redaction.
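As a minimal sketch, the redaction and single-use behavior described above plays out as follows (the logging calls are ours, purely for illustration):

```python
import logging
from typing import Any, Dict

from planqk.commons.secret import SecretValue

logger = logging.getLogger(__name__)

def run(data: Dict[str, Any], params: Dict[str, Any], api_token: SecretValue) -> Dict[str, Any]:
    # Safe: the SecretValue object renders as "[redacted]" in log output.
    logger.info("Using token: %s", api_token)

    token = api_token.unwrap()  # single-use: the secret is locked after this call
    # ... pass `token` to your client library, but never log or print it ...
    return {"status": "ok"}
```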
Data Pools
Data pools provide direct access to mounted file systems as a special form of input, enabling your service to process large datasets that would be impractical to pass through the JSON-based input mechanism.
Unlike regular JSON input parameters, data pools:
- Provide access to pre-uploaded files through a mounted file system
- Are accessed at runtime via /var/runtime/datapool/{parameter_name}
- Support efficient processing of large files (models, datasets, etc.)
- Include a dedicated API for listing and opening files
You can declare data pool parameters as additional arguments in your run() method by using the DataPool type annotation:
```python
import pickle
from typing import Any, Dict

from planqk.commons.datapool import DataPool

def run(data: Dict[str, Any], params: Dict[str, Any], training_data: DataPool) -> Dict[str, Any]:
    # List available files
    files = training_data.list_files()

    # Open and read a file
    with training_data.open("model.pkl", "rb") as f:
        model = pickle.load(f)
```

This maps to the following JSON input structure:
```json
{
  "data": ...,
  "params": ...,
  "training_data": {
    "id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
    "ref": "DATAPOOL"
  }
}
```

Data pools expose files from /var/runtime/datapool/{parameter_name} through a dedicated interface for listing and opening files securely. For example, if you declare a parameter training_data: DataPool, the runtime mounts the corresponding data pool at /var/runtime/datapool/training_data and provides access through the DataPool interface.
Output
Main Result
A service may produce output by returning a JSON-serializable object from the run() method. The result endpoint of the Service API (GET /{id}/result) returns such output in the HTTP response body. We recommend returning a dictionary or a Pydantic model; the platform automatically tries to serialize such return types into the HTTP response of your Service API.
For example, if the run() method returns a dictionary like { "sum": 6 }, the result endpoint returns the following JSON response:
```json
{
  "sum": 6,
  "_embedded": {
    "status": {
      // omitted for brevity
    }
  },
  "_links": {
    "self": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1/result"
    },
    "status": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1"
    },
    "output.json": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1/result/output.json"
    }
  }
}
```

Additional Output (Files)
The platform treats any file written to /var/runtime/output as output of the service. Additional files written to this directory can later be downloaded through the Service API. Respective links are provided in the Service API response, according to the HAL specification (see example above). For example, if you write a file result.txt to /var/runtime/output, the result response will contain the following link to download the file: https://<service-endpoint>/<service-execution-id>/result/result.txt.
We recommend using additional files only for large outputs that should be downloaded by the user.
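As a minimal sketch, an additional file can be produced from within run() like this (the file name and content are ours, purely for illustration):

```python
from pathlib import Path
from typing import Any, Dict

OUTPUT_DIR = Path("/var/runtime/output")

def run(data: Dict[str, Any], params: Dict[str, Any]) -> Dict[str, Any]:
    # Any file written to /var/runtime/output becomes downloadable
    # through the result endpoint of the Service API.
    (OUTPUT_DIR / "result.txt").write_text("detailed results ...")
    return {"status": "done"}
```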
Log Output
You can use logging to inform the user about the progress of the service execution or to provide additional information about the result.
You may produce log output, either by printing to stdout or by using an appropriate logging framework. Users can retrieve the log output via the GET /{id}/log endpoint of the Service API.
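For example, using Python's standard logging module (the messages are ours, purely for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)  # emit log records to stdout
logger = logging.getLogger(__name__)

logger.info("Starting service execution ...")
logger.info("Processed %d values", 3)
```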
DO NOT log sensitive information like passwords, API keys, or any other type of confidential information.
Build Process
The Python Template expects a requirements.txt file in the root directory of your project. This file should contain all required Python packages for your project. The runtime installs these packages in a virtual environment when containerizing your project.
The runtime also expects a src package in the root directory of your project. In addition, there must be a program.py file in the src package, containing a run() method. This method is called by the runtime to execute your service.
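Putting these requirements together, a minimal project layout looks like this (the project name is hypothetical):

```
my-service/
├── requirements.txt
└── src/
    ├── __init__.py
    └── program.py
```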
Docker
If you want to use a custom Docker container to power your service, you must select Docker as your runtime configuration (Service Details).
We recommend using "Docker" only if one of the following reasons applies:
- You need OS-level packages not included in the Python Template. With Docker, you have complete control over your base operating system and installed packages.
- Your application is in a language not yet supported by the platform, like Go or Rust.
- You need guaranteed reproducible builds. We release regular updates to our coding templates to improve functionality, security, and performance. While we aim for full backward compatibility, using a Dockerfile is the best way to ensure that your production runtime is always in sync with your local builds.
Examples and Starter Template
A starter template for a custom Docker container project can be found in our starter-docker repository. Another example, using Node.js, can be found in our samples repository.
Lifecycle
You have to create a Docker container that can be run as a one-shot process. This means the Docker container starts, runs your code once and then exits. You may use exit codes to indicate success (exit code 0) or failure (exit code 1) of your code.
Input
The platform ensures that the input provided via the Service API in the form of { "data": <data>, "params": <params> } is mounted into the /var/runtime/input directory of the running container.
The runtime creates a file for each top-level property of the input JSON object. For example, given the following input:
```json
{
  "data": { "values": [1, 2, 3] },
  "params": { "round_up": true }
}
```

The runtime creates the following files:

- data.json with the content { "values": [1, 2, 3] }
- params.json with the content { "round_up": true }
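A minimal sketch of reading these files inside the container (Python is used here only for illustration; any language works):

```python
import json

# Each top-level property of the input JSON object is mounted
# as a separate file under /var/runtime/input.
with open("/var/runtime/input/data.json") as f:
    data = json.load(f)

with open("/var/runtime/input/params.json") as f:
    params = json.load(f)

print(sum(data["values"]))  # e.g., 6 for the input above
```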
IMPORTANT
The input for a service must always be a valid JSON object.
Secrets
Services often require access to sensitive information such as API tokens, credentials, or other confidential data. Secrets must be specified in the $secrets property of the input JSON object.
```json
{
  "data": ...,
  "params": ...,
  "$secrets": {
    "api_token": "my-secret-token"
  }
}
```

Secrets are fed as special environment variables into the runtime. The parameter name is converted to uppercase with a SECRET_ prefix:
- api_token → SECRET_API_TOKEN
- ibmToken → SECRET_IBM_TOKEN
- iqm_token → SECRET_IQM_TOKEN
You will only have access to the secret value during the runtime of the service execution.
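A minimal sketch of reading such a secret inside the container (Python is used here only for illustration):

```python
import os

# "$secrets": { "api_token": "..." } is exposed as SECRET_API_TOKEN
api_token = os.environ["SECRET_API_TOKEN"]
```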
IMPORTANT
Never log or print the secret value. In Python, you could use the SecretValue container (from planqk-commons), which provides several security features to protect sensitive information.
Data Pools
Data pools provide direct access to mounted file systems as a special form of input, enabling your service to process large datasets that would be impractical to pass through the JSON-based input mechanism. You must specify data pools in the respective property of the input JSON object.
```json
{
  "data": ...,
  "params": ...,
  "training_data": {
    "id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
    "ref": "DATAPOOL"
  }
}
```

Files are mounted to /var/runtime/datapool/{parameter_name} accordingly. For example, if you declare a field training_data, like above, the runtime will mount the corresponding data pool at /var/runtime/datapool/training_data.
Output
The platform treats any file written to /var/runtime/output as the output of the service.
Main Result
Output that should be returned as the HTTP response of the result endpoint (GET /{id}/result) must be written to the file output.json.
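A minimal sketch of writing this file inside the container (Python is used here only for illustration):

```python
import json

# The content of /var/runtime/output/output.json is returned as the
# HTTP response body of the result endpoint.
with open("/var/runtime/output/output.json", "w") as f:
    json.dump({"sum": 6}, f)
```

For example, if you write the content { "sum": 6 } to /var/runtime/output/output.json, the result endpoint will return the following JSON response: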
```json
{
  "sum": 6,
  "_embedded": {
    "status": {
      // omitted for brevity
    }
  },
  "_links": {
    "self": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1/result"
    },
    "status": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1"
    },
    "output.json": {
      "href": "...service endpoint.../ee49be82-593d-4d12-b732-ab84e0b11be1/result/output.json"
    }
  }
}
```

Backward Compatibility
You may also write the content of the output.json file to stdout, in the following format:
```
PlanQK:Job:MultilineResult
{
  "sum": 42
}
PlanQK:Job:MultilineResult
```

Only the first occurrence of the PlanQK:Job:MultilineResult block will be considered as output. The content of the block must be a valid JSON object.
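A minimal sketch of emitting this block from Python (purely for illustration):

```python
import json

print("PlanQK:Job:MultilineResult")
print(json.dumps({"sum": 42}))
print("PlanQK:Job:MultilineResult")
```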
Additional Output (Files)
Any other file written to /var/runtime/output can later be downloaded by the user. Respective links are provided in the Service API response, according to the HAL specification (see example above). For example, if you write a file result.txt to /var/runtime/output, the result response will contain the following link to download the file: https://<service-endpoint>/<service-execution-id>/result/result.txt.
We recommend writing the main result to output.json and using additional files only for large outputs that should be downloaded by the user.
Log Output
You can use logging to inform the user about the progress of the service execution or to provide additional information about the result.
You may produce log output, either by printing to stdout or by using an appropriate logging framework. Users can retrieve the log output via the GET /{id}/log endpoint of the Service API.
DO NOT log sensitive information like passwords, API keys, or any other type of confidential information.
Build Process
The Docker runtime expects a Dockerfile in the root directory of your project. This file should contain the instructions to build your Docker container. The runtime builds the Docker container according to the instructions in the Dockerfile.
Make sure you use CMD or ENTRYPOINT to run your code in the Docker container. For example, if you have a Python script program.py in a Python package starter that you want to run, you should add the following line to your Dockerfile:
```dockerfile
CMD ["python", "-m", "starter.program"]
```
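For context, a complete minimal Dockerfile might look like the following; the base image and dependency handling are assumptions, so adapt them to your project:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first to benefit from Docker layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code (e.g., the "starter" package).
COPY starter ./starter

# Run as a one-shot process: execute once, then exit.
CMD ["python", "-m", "starter.program"]
```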
