In short: developers and DevOps engineers. But the answer is a bit more nuanced than that.
If you enjoy developing software more than configuring the containerized app you just created, dyrector.io is for you.
The platform helps you to spark up your containerized stack on any infra hassle-free, so you can spend more time doing things you like.
Imagine the platform as a hub where all the components of your infrastructure can be accessed, and containers can be started, shut down or restarted.
This hub not only grants you access to these resources, but also enables your teammates to interact with your applications in a self-service manner. You'll only be needed to help them with deployments and troubleshooting when necessary.
Tinkering with stuff is your passion, and we're here to help you with that. If a project you'd like to use has an OCI compatible image, you can set it up on your local machine, on-premises server or cloud infrastructure.
Tip for self-hosting enthusiasts: you can self-manage dyrector.io for free, unlimited, forever.
dyrector.io is the right platform for startups eyeing containerization. The platform's release management capabilities help you spend your funds on meaningful stuff instead of wasting valuable engineering hours on repeated, mundane tasks.
As an organization, you might already have invested in some type of infrastructure. Moving your services from that infrastructure is painful and resource-heavy. With the platform, it's completely unnecessary, because the platform can be used with your infrastructure right away. And in case you decide to leave the platform, you can do so by exporting YAML files to avoid losing your data.
It's an open-source continuous delivery platform offering self-service deployment and configuration management capabilities via a GUI and API to any cloud or on-premises infrastructure.
You can find the project's repository on GitHub.
You already have a Kubernetes cluster and want to manage it with ease.
Multiple users on your team need to have access to your containers for management.
You'd like to configure versions once, without repetition.
Your team needs self-service deployments for QA or sales purposes.
A delivery platform that replaces Docker and Kubernetes command-line interactions with abstractions. You're able to configure any OCI-compatible container on a configuration screen or in a JSON editor.
You can use the platform by installing its agents on your infrastructure. The agent will communicate with the platform to conduct interactions with containers running on your infra.
The platform is ready to interact with your existing infrastructure and clusters right away, whether cloud or on-premises, since we don't offer infrastructure ourselves. You can deploy images from any registry you have access to.
The platform can be used without moving your services to a brand new infrastructure. To keep setup quick, we don't provide any infrastructure to our users.
Chat notifications on Discord, Slack, and Teams let you and your team know about new deployments and versions to increase collaboration.
This is an extremely compressed guide for new users who would like to give cloud-hosted dyrector.io a look as quickly as possible.
Go to or .
If you just signed up, check the inbox of the e-mail address you added for a confirmation e-mail and verify your account with the link.
Create a team.
On the Nodes page, add a new Docker or Kubernetes orchestrated node.
Add a registry to the platform. Docker Hub is available by default. Bypass this step and step 5 by saving one of the templates as a project.
Create a project.
Add images to your project.
Configure the images if needed.
Deploy. 🎬
dyrector.io consists of 2 major components:
an agent that's installed on your environment where you want to deploy images – crane is the agent that communicates with Kubernetes API and dagent is the agent for communication with Docker API, both written in Go –,
and a platform (UI developed in React.js and Next.js; backend developed in Node.js and Nest.js). Communication between the agents and the platform takes place over gRPC with TLS encryption. The data is managed in a PostgreSQL database mapped by Prisma ORM.
Alpha is suggested for non-production purposes. If you want to use it for production, reach out to us at .
Self-hosted dyrector.io is free and will always be, without feature and usage restrictions.
The platform abstracts away interactions with containers. You can deploy images, start, restart, and delete containers with the platform. Logs and container inspection allow you to dig deeper into a container's operation when needed.
If you're passionate about self-hosting the tools you use every day, you can add all of your environments to the platform as nodes, where you can manage all the containers you run. Instead of setting up dashboards and adding each service individually, you can interact with your whole ecosystem through one GUI.
Pro tip: you can , too.
The key purpose of multi-instance deployments is to avoid repetitive tasks when the same stack needs to be deployed to dozens or hundreds of deployment targets. After configuring the stack once, you're able to select all the nodes where you'd like to set it up.
Below you can see a flowchart that illustrates how you can deploy the same stack to multiple environments at the same time.
Another scenario is when a 3rd-party redistributes the business application your organization develops. In the flowchart below you can see how this process differs from direct distribution as described above.
In progress: Bundled configurations enable your team to assign templatized configurations through the whole process.
QA, PMs and salespeople can spark up your stack instantly on their own by deploying the stack to their local machine or a demo environment as a project.
Installing Next.js apps with NGINX to a VPS or a Kubernetes cluster
Installing single images (like RabbitMQ or a database)
Checking server/cluster status
Getting started
Give the platform a quick try with this guide.
Self-managed guide
Spark up your self-managed platform for 100% free.
Who is it for?
Find out how it can help you.
Nodes are the environments that'll be the target of your deployments.
Node setups require admin or root privilege. Without that, it's not possible to install dyrector.io's agent on your node in the case of both Docker and Kubernetes.
If you're curious about the install scripts of the agent, you can check them out at the links below:
Step 1: Open Nodes on the left and click ‘Add’ on top right.
Step 2: Enter your node’s name and select its icon.
Tip: You can write a description so others on your team can understand the purpose of this node.
Step 3: Click ‘Save’ and select the type of technology your node uses. You can select
Docker Host,
and Kubernetes Cluster.
Docker Host requirements are the following:
a Linux host/VPS,
Docker or Podman installed on host,
ability to execute the script on host.
Kubernetes Cluster requirements are the following:
a Kubernetes cluster,
kubectl authenticated, active context selected,
ability to run these commands.
Users are able to opt in to install Traefik, as well. In that case, they need to add an ACME e-mail address where they'll be notified when their certificate is about to expire.
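Before generating the install script, you can sanity-check the requirements above from a terminal on the prospective node. This is a minimal sketch using standard Docker and kubectl commands; adapt it to your environment.
# Docker host: confirm the daemon is reachable and note the engine version
docker info --format '{{.ServerVersion}}'
# Kubernetes cluster: confirm kubectl is authenticated and an active context is selected
kubectl config current-context
kubectl cluster-info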
Step 4: Depending on your node's OS, select whether you'd like to generate a shell or a PowerShell script. Shell scripts are supported on Linux nodes, while PowerShell scripts are designed to be used with Windows nodes.
Step 5: After picking the technology and the script's type, click the ‘Generate script’ button to generate a one-liner script.
Step 6: Run the one-liner in sh or bash.
The one-liner will generate a script that’ll set the platform’s agent up on your node.
Information and status of your node will show on the node's page, so you can see if the setup is successful right away.
Now you're ready to set up your project and one step closer to deploying your application.
Keep track of the actions your teammates execute. This way if a malfunction occurs, you can understand if it was due to a specific action or an outside factor.
We track user IDs, timestamps and non-sensitive data related to actions on the platform. Audit logging on our platform only monitors actions executed by users via the platform.
We don't get access to self-managed team logs.
Container routing is available for nodes where Traefik is enabled on node install.
The expose strategy of the container you wish to expose should be set to HTTP or HTTPS instead of none. In the routing section of the configuration screen, you need to specify a domain and a port. The additional variables are optional.
When a path is specified, the Traefik container will exclusively route requests with URLs containing that designated path prefix.
Enabling path stripping will result in forwarding the request without the specified path.
The Traefik router's name is automatically generated as "prefix + name."
If the router uses HTTPS, all necessary Let's Encrypt labels will also be added. It's important to note that middlewares can only be applied by adding them as custom Docker labels.
If you have Docker labels set for your container, you can specify them as key-value pairs in the config screen under the Docker labels section.
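For example, since middlewares can only be attached through custom Docker labels, you could add them as key-value pairs in the Docker labels section. The label names below follow Traefik's standard v2 convention, but the middleware name and the router name are hypothetical placeholders; substitute the router name generated for your container.
# Define a strip-prefix middleware and attach it to the generated router (placeholder names):
traefik.http.middlewares.myapp-strip.stripprefix.prefixes=/api
traefik.http.routers.<prefix-name>.middlewares=myapp-strip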
Deployment – Initiate deployments to a single or multiple deployment environments called nodes. A node can be in any cloud, VPS, or on-premises environment.
Release & Configuration management – One-time configuration for releases in a configuration screen or a JSON. Configure releases in real time with your teammates. In progress: Specify bundled configuration variables instead of going through them one-by-one, repeatedly.
Instant test environments – Spark up your stack in an instant on your local machine after adding it as a node without assistance.
Monitoring – Check container and deployment statuses via the platform and intervene when required.
Audit log – Audit log allows teams to trace back activity that might have caused an anomaly.
Chat notifications – Automated chat messages on Discord, Slack, and Teams when a teammate takes an action on the platform, so they don't have to go out of their way to let others know about completed tasks.
Changelogs – You're able to create changelogs with ease based on commit messages with the help of Conventional Commits, so your team understands the purpose of new versions. This simplifies communication between the departments that work on the product and outsiders, for example decision-makers. The generated changelogs can be sent out via e-mail or any chatbot integration.
Secret management – Store and manage sensitive data with our Vault integration powered by HashiCorp.
RBAC – Role-based access control lets you distribute privileges to your teammates based on their responsibilities. This is important to prevent any wrongdoing in case a user profile gets compromised.
ChatOps solutions – Interact with the platform and your stack via chat messages on the chat platform your team uses.
Vaultwarden is an unofficial self-hosted Bitwarden implementation. Image is available with latest tag as a template.
After the Node where you'd like to run Vaultwarden is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, self-managed Vaultwarden is ready to use at localhost:80 by default.
The platform is currently in the making, and so is its pricing. The public alpha hosted by us is completely free. If you want to give the platform a try hosted by yourself, you can do it, free of charge, forever.
Our plan with pricing is to have one, limited free package, and multiple paid plans with different usage caps. Paid users will have access to prioritized support compared to free users, and they're going to be able to manage more nodes, projects, and deployments in the platform.
The platform isn’t a CI/CD tool but it covers certain steps of CD and its release management related aspects.
Both GitLab CI/CD and GitHub Actions can be integrated. The platform's main benefit is that it enables you to manage multi-instance deployments to different environments, and to manage several different configurations. You can bring multiple services and operational practices under the same hood.
Deployments: you can integrate the services and pipelines you already use to build your applications, and then deploy them to any desired environment.
Change log generation: increase the transparency of version management by generating change logs based on commit messages left by your developers.
Configuration management: simplified configuration management to keep your infrastructure under control while you focus on delivering value to your users.
Secrets management: HashiCorp’s Vault integration enables you to store your sensitive keys, configuration variables, tokens and other secrets securely.
Monitoring: get notifications and alerts on multiple channels about what happens on your infrastructure.
Integrate Prometheus to track and monitor your application’s and infrastructure’s performance.
Make the data tracked by Prometheus visible and easy-to-interpret to non-technical stakeholders with Grafana.
Create and manage logs of events occurring for analytics purposes with Graylog.
If you’ve created a versioned project, you’ll need to add a version to it. This version could be a rolling or an incremental version.
Step 1: Open Projects page on the left and select the project you want to set a version to.
Step 2: Click ‘Add version’ on the top right.
Step 3: Define a name for your project’s version in the name field.
Step 4: Enter a changelog if you’d like to.
Step 5: Select if you’d like to make this a rolling or an incremental version. Click here for more details on the difference between the two.
Step 6: Select if you’d like to turn this into a default version, which all future versions will automatically inherit images and their configurations from.
Step 7: Click ‘Save’ on top right.
You can create configuration bundles which you're able to apply to deployments later. These are templatized key-value pairs for environment variables.
Configuration bundles are shared between the members of a team.
Configuration bundles act as .env files.
Multiple bundles can be applied to a deployment.
Keys can't conflict between bundles applied to a deployment.
Values of a bundle can be overwritten manually when you set up the deployment. This won't impact the value stored in the bundle.
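Putting the points above together, a hypothetical bundle might hold the shared values while one key is overridden for a single deployment; the names and values below are placeholders.
# Values stored in the bundle (applied to every deployment that uses it):
API_BASE_URL=https://api.staging.example.com
LOG_LEVEL=info
# Override entered on one specific deployment – the bundle itself keeps LOG_LEVEL=info:
LOG_LEVEL=debug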
WordPress is the most popular CMS. More than 43% of all websites are managed with it. You can quickly set it up wherever you'd like to manage your content with WordPress. Image is available with latest tag.
After the Node where you'd like to run WordPress is registered, you can set up WordPress by following the steps of deployments as documented here.
Once the deployment is successful, WordPress is ready to use at localhost:4444 by default, as seen below.
To better understand how you’ll be able to manage your applications on the platform, there are certain components – entities and concepts – within the platform that need to be clarified.
A group of users working on the same project. If you weren't invited to a team, you must define one on your first login. Later you can invite others to your team; they will have access to every piece of data and every component, including configurations and secrets, that belongs to the team. To limit access to the data of one team, create a new team that'll have its own components.
Teams can have multiple Nodes, Projects and Registries.
Nodes are the deployment target environments where either of the agents is installed. A node can be any cloud or on-premises infrastructure. Without registering a node, your team can't use the platform. We suggest registering a node as the first step after creating your team.
dyrector.io doesn't provide infrastructure. You can use the platform with your already existing infra.
Node setups require admin or root privilege. Without that, it's not possible to install the platform's agent on your node.
If you're curious about the install scripts of the agent, you can check them out at the link below:
Container registries where the images that you plan to deploy are stored. You can use any Docker Registry API V2 compatible registry with the platform, including Docker Hub, GitHub, GitLab, Google, Azure registries. Unchecked sources are supported, too, as long as they're V2 compatible. Docker Hub Library is available to every user by default.
Projects are the fundamental building blocks of dyrector.io, as these are the deployable units that contain the images with the corresponding configuration. These are the stacks you’ll manage in dyrector.io. There are two types of Projects, as seen below.
Versionless Project: these projects have only one hidden version and cannot be rolled back. These are mostly useful for testing purposes.
Versioned Project: versioned projects can have multiple versions. The different versions can have different configuration and images. Semantic versioning is suggested.
Deployment is the process of delivering the images that make up a project to the environment you added as a node by installing an agent on it. You can assign environment and configuration variables to the deployments, and also edit deployments depending on the type of project you want to set up on your node.
Rolling version
Rolling versions have one deployment per version on each node, because a new deployment will overwrite the existing stack on the node. These types of versions can have multiple deployment prefixes, since you can deploy them to multiple nodes. Only In progress deployments of rolling versions aren't mutable and deletable. Once the deployment of a rolling version is complete, you can't adjust or delete it.
Incremental version
You can deploy multiple incremental versions of the same project to a node, therefore different incremental versions can have the same deployment prefix. The first deployment of an incremental version is mutable as long as it's not In progress. Incremental versions can have multiple deployments with the same prefix, but only one of them can have Preparing status. Only In progress deployments can't be deleted. After deploying an incremental version, the deployment will get one of the two statuses below:
Successful: successful deployments remain immutable. After a successful deployment, you can roll back to the previous version with the corresponding database. New successful deployments turn previous ones that belong to older versions obsolete. When a previous version gets rolled back, all the deployments that come after it turn downgraded.
Failed: if a deployment comes back failed, it remains mutable.
Versionless projects come without a version. The purpose of versionless projects is to reduce infrastructure maintenance overhead.
Use cases:
Messaging queues: RabbitMQ, MQTT, etc.
Proxies: Traefik, NGINX, etc.
Databases: Postgres, MySQL, etc.
Audit logs collect team activity. They list executed actions, the users who initiated them and the time when the actions happened.
Logs are assigned to teams.
Your user profile. One profile can belong to multiple teams.
Chat notifications help you to get informed about your team's activity in an instant. As of now 3 platforms are supported:
Discord,
Slack,
and Microsoft Teams.
You can get notifications about 4 events:
new Node added,
new Version created,
successful and failed deployments,
new teammate invited.
Before you start creating your Notifications, make sure you have the webhook ready. You can find how to create a webhook for , , and in their official documentations.
Step 1: Click 'Notifications' on the left side.
Step 2: Select one of the 3 supported chat platforms.
Step 3: Enter the following data:
name of your notification,
webhook URL.
Step 4: Set the toggle to 'Active' or 'Inactive'. You can change this later to activate or deactivate the notifications.
Step 5: Click 'Save' on the top right.
Unchecked means image availability checking is disabled. Any Docker Registry HTTP API V2 compatible registry can be added this way, but images and their tags cannot be browsed when assembling a project. You can add an image to a project from an unchecked registry by entering the image's exact name and, when needed, the tag.
When you add images from an unchecked registry to a project, and the image's name doesn't match, the deployment will fail.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select Unchecked Registry and switch the toggle under the URL field to ‘Private’.
Step 4: In the corresponding fields, enter the registry's URL.
Step 5: Click ‘Save’ button on the top right.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select GitLab Registry type.
Step 4: Select if you'd like to add a group or a project.
Step 5: In the corresponding fields, enter:
Your GitLab username,
Your password or access token generated in GitLab with the steps documented . Select the read_api and read_registry scopes.
And your organization’s or your project's GitLab name. You can find either under its name on its main page.
Step 6: Make sure the Self-managed GitLab toggle is off. If you select GitLab Registry type, SaaS should be set by default.
Step 7: Click ‘Save’ button on the top right.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select GitLab Registry type.
Step 4: In the corresponding fields, enter:
your GitLab username,
your password or access token generated in GitLab with the steps documented . Select the read_api and read_registry scopes.
And your organization’s GitLab name.
Step 5: Turn on self-managed GitLab toggle.
Step 6: Enter GitLab Registry’s URL and the GitLab API URL without the https prefixes.
Step 7: Click ‘Save’ button on the top right.
Docker Hub is a public image library. You can add registries from Docker Hub with the following steps.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select Docker Hub type.
Step 4: Enter the registry’s name or your username in Docker Hub in the ‘Organization name or username’ field.
Step 5: Click ‘Save’ button on the top right.
Cal.com is a meeting scheduler application. Users can set up its self-hosted stack.
Cal.com is awesome. We like it and use it every day, and chances are that if you're here, you like it, too. Even better, they provide self-hosted usage. But as some users pointed out on , self-hosting Cal.com is a challenging process, so we turned it into a template for easy setup.
After the Node where you'd like to run Cal.com is , you can set it up by following the steps of deployments as documented .
cal-db (latest)
POSTGRES_PASSWORD has to be specified.
cal-com (2.5.10)
DATABASE_URL needs to contain POSTGRES_PASSWORD's value in postgresql://cal-user:${POSTGRES_PASSWORD}@cal-db:5432/cal-db for cal-db.
NEXTAUTH_SECRET and CALENDSO_ENCRYPTION_KEY need to be specified. We recommend generating these secrets.
If you have a node with Traefik enabled, you can use http://cal.localhost (or any other domain set up in the ingress settings) by setting NEXT_PUBLIC_WEBAPP_URL to the public URL.
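A minimal sketch of preparing these values from a terminal; the password below is a hypothetical placeholder, and you should check Cal.com's own documentation for any exact length requirements your version has for the secrets.
# One way to generate random values for NEXTAUTH_SECRET and CALENDSO_ENCRYPTION_KEY:
openssl rand -base64 32
# With a hypothetical POSTGRES_PASSWORD of "changeme", DATABASE_URL becomes:
# postgresql://cal-user:changeme@cal-db:5432/cal-db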
Once the deployment is successful, self-hosted Cal.com is ready to use at by default, as seen below.
Rolling versions are similar to versionless projects in that they can't be rolled back. They’re always mutable but, contrary to incremental versions, they aren’t hierarchic and lack a version number.
Step 1: After picking the Rolling tag, click ‘Save’. You’ll be directed to the Project tab. Select the project again.
Step 2: Click ‘Add version’.
Step 3: Enter the rolling version's name and specify a changelog.
Step 4: Click 'Save'. You'll be directed to the board of versions of your versioned project.
Step 5: Click 'Images' button in the card that belongs to the version you'd like to assemble.
Step 6: Click 'Add image'.
Step 7: Select the Registry you want to add images from.
Step 8: Type the image’s name to filter images. Select the image by clicking on the checkbox next to it.
Step 9: Click ‘Add’.
Step 10: Pick the ‘Tag’ icon next to the bin icon in the actions column to pick a version of the image you selected in the previous step.
Now you can define environment configurations to the selected image. For further adjustments, click on the JSON tab where you can define other variables. Copy and paste it to another image when necessary. Learn more about Configuration management .
Step 11: Click ‘Add Image’ to add another image. Repeat until you have all the desired images included in your project.
Find out what's in the works on , or check out coming features and integrations we have in mind at the links below.
Send a feature request to or drop a line on our .
Minecraft needs no introduction. You can set up a Minecraft Server by following the steps of deployments as documented . Image is available with latest tag as a template.
Once the deployment is successful, the server is ready to use at by default.
Join our community to discuss DevOps, Kubernetes or anything on our public .
You can find the project's GitHub repository . Feel free to contribute!
Check our to read posts written by our team of DevOps specialists to improve your processes.
Follow us on , , and to stay updated about the latest developments.
If you have any questions or feedback, let us know at .
Google Microservices Demo shows how to build and deploy microservices-based applications using Google Cloud Platform (GCP). It allows users to demonstrate the use of technologies like Kubernetes/GKE, Istio, Stackdriver, gRPC and OpenCensus. You can quickly set Google Microservices Demo up on your infrastructure.
After the Node where you'd like to run Google Microservices Demo is , you can set it up by following the steps of deployments as documented .
Once the deployment is successful, Google Microservices Demo is ready to use at by default, as seen below.
Portainer is a popular Docker GUI for containerized applications. It supports Docker, Docker Swarm and Kubernetes orchestrated runtimes.
The most important difference between Portainer and dyrector.io is the feature-rich release management capability the latter offers for containerized applications. You're able to configure and deploy your application's containerized stack with dyrector.io.
While you can trigger multi-instance deployments with dyrector.io, Portainer offers single-node deployments.
Portainer is very useful for teams who would like to interact with containerized applications and their infrastructure in a simple manner. Its Community Edition covers many Docker-specific use cases, but some premium features are only available in the Business Edition, which offers limited usage to free users.
dyrector.io offers more for teams who develop their own software and want to manage its versions as a containerized stack, and then deploy them to a single or multiple deployment targets with a single click. Self-managed dyrector.io is 100% open-source without usage limitations.
Step 1: On the Product tab, click ‘Add’ on top right.
Step 2: Enter the Product’s name.
Tip: You can write a description so your teammates can understand the purpose of this Complex Product.
Step 3: Select Complex on the switch under description.
Step 4: Click ‘Save’. You’ll be directed to the Product tab. Select the Product you just created.
Step 5: Click ‘Add Version’.
Step 6: Enter a name and a changelog for the new version. At this point, you can choose whether you’d like to create a Rolling or an Incremental product.
|  | Portainer | dyrector.io |
| Supports Docker & K8s | ✅ | ✅ |
| Continuous Deployments | ✅ | ✅ |
| Chat notifications | ❌ | ✅ |
| Multi-instance deployments | ❌ | ✅ |
| Open-source | Premium functions restricted | 100% open-source |
| Version Type | Rolling | Incremental | None (versionless) |
| Rollbacks | ❌ | ✅ | ❌ |
| History | Previous version is overwritten | Image and configuration inheritance | ❌ |
| Ideal for | Nightly Versions | Production | Testing, Single-container stacks |
Manually triggered deployments at the end of a CD pipeline. Related features: scheduled releases. Configure your release and schedule a deployment time for the designated environment when the deployment won't be disruptive to users.
Misconfiguration-induced failures are very common, especially when you don't have to regularly deal with configuration variables. Add the human factor to the equation and the chances of outages induced by faulty configurations increase significantly.
For this reason we're working on a feature that allows users to manage bundled configurations. This way you'll be able to treat your configurations in a templatized manner.
Unawareness of how one version of your image is different from another is a risky practice. To avoid this, the platform helps you create changelogs based on your commit messages with Conventional Commits service. Make sure developers working on your application leave meaningful commit messages, so everyone working on your project understands the details of each version.
Changelog generation can reduce knowledge gap between stakeholders, like developers who work every day on the project and outsiders who occasionally deal with the project, like decision-makers.
Besides the changelogs, you can also leave a comment related to nodes and projects on the platform which your teammates can see on the platform's UI.
Auto send changelogs: once the changelog is generated, you can send it via e-mail or chat, whichever your team uses to communicate. For further information, check the ChatOps section.
You can use HashiCorp’s Vault integration for secrets management to keep your passwords and encryption keys secure. This means the platform complements your DevSecOps practices. Your secrets will be securely stored in Vault, which you can access through the platform.
HashiCorp Vault is a de facto standard for secrets management, used by many organizations. We encourage our users to use it, as well.
General Secret Storage: store your secrets, including sensitive configurations, tokens, API keys. Query them by using vault read.
Employee Credential Storage: instead of using sticky notes around your screen, store and distribute credentials in one place. Vault has an audit log mechanism, which lets you know who had access to one of the stored secrets. This simplifies monitoring which keys have been rolled or not.
API Key Generation for Scripts: generate temporary access keys for the duration of a script. The keys only exist for that duration and are logged by HashiCorp.
Data Encryption: aligning with the purpose of the platform, HashiCorp’s Vault enables developers to focus on developing. The Vault takes care of data encryption and decryption, instead of the developers and other technical staff on the team.
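A minimal sketch of that workflow with the Vault CLI, assuming a KV version 1 secrets engine mounted at secret/; the path and key names are hypothetical.
# Store a secret:
vault write secret/myapp api_key=s3cr3t
# Query it later with vault read, as mentioned above:
vault read secret/myapp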
Role-based access control allows you to manage the privileges of other users. This is an extra measure of security for your infrastructure and data. Using RBAC helps you avoid situations where someone has unauthorized access to your account and executes harmful actions.
Principle of Least Privilege: consider the tasks of each user having access to your infrastructure. Only give them privileges necessary to complete their tasks, including modifying and managing configurations.
The platform's functionality implemented into ChatOps commands.
In case you decide to stop using the platform, you can generate YAML files to easily set up your services on another platform without losing data.
You can inject files into a container using S3 storages by following the steps below.
What you'll need:
An S3 API compatible storage available
Step 1: Navigate to Storage on the platform and click the Add button at the top right corner.
Step 2: Enter the following information to their respective fields:
Name
Your storage's URL
Access key
Secret key
Step 3: Click Save.
Step 1: Navigate to your projects and select the project that has the container you'd like to inject files to.
Step 2: Click on the gear icon on the right side of the container.
Step 3: If it's turned off, turn on the Storage filter in the sub filters.
Step 4: Select the storage you'd like to inject files from, and enter the bucket path.
You can specify a folder within a bucket, too. (Example: "bucket/folder/sub-folder")
It is possible to use container registries with self-signed (private) certificates. Be warned: this is an advanced, less convenient setup.
You can opt for using unchecked registries, which means images are not checked at all and image URLs are passed to the agent straight away. This way you can skip setting up certificates for the API.
The environment variable is NODE_EXTRA_CA_CERTS; the process expects a concatenated list of your certificates. More info: https://nodejs.org/api/cli.html#cli_node_extra_ca_certs_file
Concatenating pem files to a single file:
cat *.cert.pem > node_extra_ca_certs
Mount the generated file using and provide the environment variable.
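As a sketch, depending on how you run the platform, the concatenated file can be provided to the platform's container with a bind mount plus the environment variable; the paths below are placeholders you have to adapt to your setup.
-v /host/certs/node_extra_ca_certs:/certs/node_extra_ca_certs
-e NODE_EXTRA_CA_CERTS=/certs/node_extra_ca_certs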
The two supported target nodes require different approaches.
Assuming the host already trusts the CA: when adding a node, the install script is visible, and you can also use this script to add extra behavior if it is not explicitly implemented in the UI.
The component allows us to provide additional CA files using the SSL_CERT_FILE variable; it expects one .crt file that you have to mount from the host.
The last command of the script will be extended with two lines:
-e SSL_CERT_FILE=/path/to/ssl/ca.crt
-v /host/cert/path/ca.crt:/path/to/ssl/ca.crt
Make sure you use the correct values.
Using Kubernetes, you have to make sure that nodes already trust your CA: example.
You have to modify the install script to mount the crt file to crane.
You can create a secret from a file using this one-liner; make sure it is in the dyrectorio namespace.
kubectl create secret generic my-cert --from-file=path/to/bar -n dyrectorio
Registries are where the images you'd like to deploy are collected. While there are multiple registries explicitly supported, all OCI registries are supported. You can add registries from the following sources:
V2 Registries are Docker Registry HTTP API V2 compatible. Both private and public registries are supported.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select V2 Registry and switch the toggle under the URL field to ‘Private’.
Step 4: In the corresponding fields, enter:
URL of your registry without the /v2 suffix,
username, and
the token or password which you use to access it.
Step 5: Click ‘Save’ button on the top right.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon below.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select V2 Registry type and switch the toggle under the URL field to ‘Public’.
Step 4: Enter the URL of your registry without the /v2 suffix.
Step 5: Click ‘Save’ button on the top right.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select Google Registry and switch the toggle under the URL field to ‘Private’.
Step 4: In the corresponding fields, enter the organization name. Upload the JSON key file you can generate as documented here. In the Google Cloud documentation you only need to follow instructions until the 2nd part.
Step 5: Click ‘Save’ button on the top right.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon below.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select Google Registry type and switch the toggle under the URL field to ‘Public’.
Step 4: Enter the Organization name.
Step 5: Click ‘Save’ button on the top right.
Versionless projects have only one abstracted-away version and cannot be rolled back. These are mostly useful for testing purposes or single-container stacks.
Step 1: On the Project tab, click ‘Add’ on top right.
Step 2: Enter the project’s name.
Tip: You can write a description so your teammates can understand the purpose of this project.
Step 3: Select the versionless type under the description.
Step 4: Click ‘Save’. You’ll be directed to the Project tab. Select the project you just created.
Step 5: Click ‘Add Image’.
Step 6: Select the Registry you want to filter images from.
Step 7: Type the image’s name to filter images. Select the image by clicking on the checkbox next to it.
Step 8: Click ‘Add’.
Step 9: Click on the ‘Tag’ icon in the actions column, to the left of the bin icon. This will allow you to select a version of the image you picked in the previous step.
You can define environment configurations to the selected image by clicking on the gear icon on the right. For further adjustments, click on the JSON tab where you can define other variables. Copy and paste it to another image when necessary. Learn more about Configuration management here.
Step 10: Click ‘Add Image’ to add another image. Repeat until you have all the desired images included in your project.
This function is still in the works; anomalies might occur.
After the CI/CD pipeline builds and pushes the image to a container registry, the pipeline triggers the deployment on the platform. The platform automatically signals to the agent that it should pull and start the image with the tag that already exists on the node.
Step 1. Create a versioned project.
Step 2. Add a rolling version to the project.
Step 3. Add images to the version.
Step 4. Add a deployment to the version.
Step 5. Click Create in the Deployment token card.
Step 6. Enter a name for your deployment token and set its expiration time, then click Create. You'll need to generate a new one when the token expires.
Step 7. Save the token somewhere secure as you won't be able to retrieve it later. Click Close when you're done.
Step 8. Paste the curl command into the pipeline.
Never store your token in your git repository. Use pipeline secrets instead.
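For example, in a hypothetical CI job you'd keep the token in a pipeline secret (here called DYO_DEPLOYMENT_TOKEN) and substitute it into the command copied in step 8 instead of committing the raw token. The URL variable and header below are placeholders only; the actual command is the one the platform generated for you.
# Sketch only – replace the placeholders with the curl command copied from the platform:
curl -X POST -H "Authorization: Bearer ${DYO_DEPLOYMENT_TOKEN}" "${DEPLOYMENT_TRIGGER_URL}"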
You can revoke the token by clicking on the Revoke token button in the Deployment token card.
LinkAce is a self-hosted archive to collect links of your favorite websites. You can quickly set it up on your infrastructure. Template consists of MariaDB (10.7) and LinkAce image with simple tag.
After the Node where you'd like to run LinkAce is registered, you can set it up by following the steps of deployments as documented here.
Requirements:
The APP_KEY environment variable must be 32 characters long.
After deployment, exec into the linkace-app container and add chmod 777 privileges to the /app/.env file.
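A quick sketch of both requirements from a terminal; the container name may differ depending on your deployment prefix.
# Generate a 32-character APP_KEY (hex encoding of 16 random bytes gives exactly 32 characters):
openssl rand -hex 16
# After the deployment, loosen the permissions on the .env file inside the LinkAce container:
docker exec linkace-app chmod 777 /app/.env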
Once the deployment is successful, LinkAce is ready to use at localhost:6780 by default, as seen below.
File injection to containers is possible with the Storage function. It's S3 API compatible; as of now, only Azure Blob Storage isn't supported.
Storage capabilities don't cover configuration backup storage. Its sole purpose is to offer a way for file injection.
We decided to go with S3 API compatible storages as it’s one of the most popular technologies and offers interoperability with a fair number of open-source projects. These flat-structure file storages provide a number of functions, including versioning, different types of access control, and so on.
One example of an S3 API use case is uploading an object via a REST endpoint, after which the object is available through a simple URL.
Amazon's S3 solution isn’t open-source but a few S3 API compatible open-source implementations are listed below:
MinIO,
OpenIO,
Scality.
MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. The Docker image of MLflow is available with the 2.4.0 tag.
After the node where you'd like to run MLflow is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, MLflow is ready to use at http://localhost:5001/ by default, as seen below.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand the purpose of this registry.
Step 3: Select GitHub Registry type.
Step 4: Select if you'd like to add an organization or a user.
Step 5: In the corresponding fields, enter:
Your GitHub username,
Personal access token generated in GitHub with the steps documented here. Select the repo and read:packages scopes. The repo scope is required due to GitHub's limitations. You're still able to add public GitHub registries, too.
And your organization’s or your user's GitHub name.
Only classic tokens are supported. Fine-grained tokens aren't supported yet.
Step 6: Click ‘Save’ button on the top right.
Gitea is a painless self-hosted Git service. It is similar to GitHub, Bitbucket, and GitLab. Image is available with latest tag on dyrector.io.
After the Node where you'd like to run Gitea is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, Gitea is ready to use at http://localhost:3000/ by default, as seen below.
Strapi is a CMF that allows developers to build APIs in Node.js. It's mainly used to build content-driven applications or websites. You can quickly set it up on your infrastructure. Both images of Strapi are available with latest tag.
After the Node where you'd like to run Strapi is registered, you can set up Strapi by following the steps of deployments as documented here.
Once the deployment is successful, Strapi is ready to use at localhost:1337 by default, as seen below.
Unfortunately we don't support applications that don't run in a containerized environment.
No. It can be configured with any environment, cloud or on-premises, Docker or Kubernetes. Read how you can .
Yes. Find the NGINX example .
Short: Generic configuration => Image, specific configuration => Container config, configuration is inherited from Image configuration.
Parameters that are generic and context independent should be defined on the Image level. Other, context dependent information, like an environment variable PUBLIC_URL="https://example.com" should be defined on the Deployment's level. During deployment there is a one-way merge using these configurations with Container configuration having the higher priority.
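A sketch with hypothetical values to illustrate the merge: the image-level configuration holds the generic defaults, and the deployment's container configuration overrides them.
# Image configuration (generic, context independent):
PORT=8080
PUBLIC_URL=http://localhost:8080
# Container configuration on the deployment (context dependent, higher priority):
PUBLIC_URL=https://example.com
# Effective configuration after the one-way merge:
PORT=8080
PUBLIC_URL=https://example.com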
Unfortunately no, but there are settings you can use to disable Kratos. More details .
Yes.
Unfortunately there's no such capability within the platform now, but creating a similar functionality is in our plans.
The platform provides , but it doesn't offer CI, as of now. Such capabilities are in our long-term plans.
Unfortunately routing isn't managed by the platform itself. When you'd like to access your node or a deployed stack through a domain, you'll need to configure routing on your own.
dyrector.io is for anyone who works with containerized applications. That means organizations, or independent developers can gain advantage of the platform's functions.
Self-managed use will stay 100% free and unrestricted. But most of our team works full-time on the platform. While we gain some revenue as a cloud consultancy, we accept to fund the project.
We needed a solution for self-service release management capabilities for an entirely different project. We couldn't find one that fit exactly our needs, so we made our own platform.
Don’t hesitate, reach out to us at . Also drop us a mail to the same address if you find a bug or any other anomaly you experience. We’ll respond within 24 hours.
Send us an e-mail at or check in on .
Templates are presets of the most popular applications for quick setup.
Self-managed setup is only supported in Docker as of now; Kubernetes support is in the works.
Use .
The docker-compose below is only designed for demoing the platform; we don't suggest using it for any other purpose.
Use .
Use .
1 CPU
8 GB RAM
Docker or Podman installed
Self-managed dyrector.io is free, unlimited, forever.
This freedom comes with a trade-off. While we'll still offer support on our server, we take no responsibility for maintaining your own instance of the platform and we won't prioritize giving support over other users. Make sure you use the latest version for a consistent experience.
While we won't put a cap on the number of nodes, deployments, projects you manage with your self-managed instance, as with every self-managed software, environment or database related problems might occur, which we take no responsibility for. For a seamless experience, give the a look.
Deployment is the key feature of the platform. It's the process of setting up the images on your node.
Step 1: Open the project or version you would like to deploy. To demonstrate the process, we used a versionless project.
Step 2: Click 'Add deployment'.
Step 3: Select the node in the 'Add deployment' block. After that, click 'Add' on the top right corner of the block.
Step 4: The images of the project will be listed. By clicking on the gear icon, you are able to define and adjust configuration variables. Learn more about Configuration management .
Step 5: Click 'Deploy'. If everything goes right, the deployment status should switch from 'Preparing' to 'In progress'. When the deployment is complete, the status should turn to 'Successful'.
You can see the status change for each image in the second picture below.
Deleting a deployment will only remove the containers from the platform. Infrastructure related data, including volumes and networks, will remain stored on the node.
When you can't deploy a version or a project because node status turns outdated, you should navigate to your node's edit page and update the agent by clicking the Update button.
Incremental versions are hierarchical in a way that they can have child versions and once a deployment is successful, the deployed versions, the environment variables, and the deployed images can never be modified. Because of this, you’re able to roll back the deployed incremental version and reinstate the last functional version.
Step 1: After picking the incremental tag, click ‘Save’. You’ll be directed to the incremental version’s preview. Click ‘Add Version’.
Step 2: Enter a name and a change log for the new version.
Step 3: Click ‘Save’. You’ll be redirected to the project's version board.
Step 4: To add images, click ‘Images’ and ‘Add Image’ on the next view.
Step 5: Select the Registry you want to filter images from.
Step 6: Type the image’s name to filter images. Select the image by clicking on the checkbox next to it.
Step 7: Click ‘Add’.
Step 8: Pick the ‘Tag’ icon next to the bin icon under the actions column to pick a version of the image you selected in the previous step.
Now you can define environment configurations to the selected image. For further adjustments, click on the JSON tab where you can define other variables. Copy and paste it to another image when necessary. Learn more about Configuration management .
Step 9: Click ‘Add Image’ to add another image. Repeat until you have all the desired images included in your project.
Step 1: Open Projects and select the versioned project you’d like to increase.
Step 2: Click ‘Increase’ button under the version of the project you’d like to add a new version to.
Step 3: Enter the version's name and add changelog. Click ‘Save’.
Step 4: Click 'Add image’ to search for images you’d like to include in the new version. If you’d like to remove an image from the previous version, click on the red trash icon.
Step 5: Pick the ‘Tag’, which is a version of the image you selected in the previous step.
Projects are the deployable units made up of images and their configuration you're going to manage through the platform.
| Version Type | Rolling | Incremental | None (versionless) |
| Rollbacks | ❌ | ✅ | ❌ |
| History | Previous version overwritten | Image and configuration inheritance | ❌ |
| Ideal for | Nightly Versions | Production | Testing, Single-container stacks |
The different project types have different deployment capabilities. For more details about the differences, check out the Components section.
Versionless projects have one abstracted-away version and cannot be rolled back. These are mostly useful for testing purposes.
Versioned projects have two types of versions: rolling and incremental. You can define one version per versioned project as default. That'll be the version future versions will inherit images and configurations from, until you set another one as default.
Rolling versions are similar to versionless projects except they’re perfect for continuous delivery. They’re always mutable but, contrary to incremental versions, they aren’t hierarchic and lack a version number.
Incremental versions are hierarchical. They can have a child version and once a deployment is successful, the deployed versions, the environment variables, and the deployed images can never be modified. This guarantees you’re able to roll back the deployed version and reinstate the last functional one if any error occurs to avoid downtime.
Proxies provide secure connection when you set up the platform for self-managed use. But they can be useful for any other uses, when you need a firewall, or you'd like to hide your location, and so on.
When you set up the platform, we highly recommend using a proxy, such as Traefik or NGINX, to secure your network.
Traefik is used by default, as seen in the designed for production use.
By default we recommend using Traefik but if you already use NGINX then here's an example.
When you configure NGINX for the platform, keep in mind the following:
Inbound traffic needs to be directed towards 3 containers: kratos, crux-ui, and crux. The 5 locations defined are below:
Locations routed to crux-ui:
/
/api/auth
/api/status
Locations routed to kratos:
/kratos (needs to be stripped)
Locations routed to crux:
/api
upstream crux-ui {
server localhost:3000;
}
upstream crux {
server localhost:1848;
}
upstream kratos {
server localhost:4433;
}
server {
listen 80;
listen [::]:80;
server_name example.com;
client_max_body_size 128m;
proxy_read_timeout 300;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com;
ssl_certificate /etc/ssl/ssl.crt;
ssl_certificate_key /etc/ssl/ssl.key;
client_max_body_size 128m;
proxy_set_header Host $http_host; # required for docker client's sake
proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 900;
location / {
proxy_pass http://crux-ui;
}
location /kratos {
rewrite ^/kratos(.*)$ /$1 break;
proxy_pass http://kratos;
}
location /api/auth {
proxy_pass http://crux-ui;
}
location /api/status {
proxy_pass http://crux-ui;
}
location /api {
proxy_pass http://crux;
}
}
Configurations can be all over the place without a single source of truth when left unmanaged for long periods of time. The more configurations you need to deal with, the more likely you’ll lose track of them. The platform can be used as single source of truth for all of your configurations, while being able to add, remove or modify configurations directly or via the JSON editor.
Every configuration you specify will remain stored after any modification or deletion to ensure you won’t have to spend time again defining already specified configurations.
Git repositories containing the configurations of your microservice architecture can be all over the place because one repo won’t cover all the configurations for all the images & components in your architecture. The platform substitutes Git repos by bringing every variable that belong to a specific Product in one place.
Think ahead: designing is the first step towards efficient and secure configuration management. Go through your organization’s structure, consider privileges and access points. This step is crucial for more efficient configuration management.
Configuration roll back: if it turns out the new configuration is faulty, you can roll back the last functioning ones.
Bundled configurations: instead of specifying the same configurations one by one to each component, you can apply variables to multiple components with one click by bundling them up.
You're able to define configurations for both images of a Project and Deployments. Variables that belong to images can be overwritten by deployment variables. You can also use sub filters to hide irrelevant variables to your configs. Below you can see all the variables for each filter – common, Kubernetes and Docker.
Common
Kubernetes
Docker
You're also able to customize your configuration in a JSON format, for easier copying.
The result should look like this:
{
// Container runtime user.
"user": null,
// Pseudo terminal allocation.
"tty": false,
// Maps an internal 80 to external 8080, external refers to the host's port.
"ports": [
{
"internal": 80,
"external": 8080
}
],
"portRanges": [],
"volumes": [],
// In Docker terminology, this is the equivalent of entrypoint.
//
// commands and args following one another will be the entrypoint, the actual
// starting parameter of the container.
"commands": [],
// Equals to CMD in Docker terminology.
"args": [],
// Possible variables can be none, expose, exposeWithTls.
"expose": null,
// When left undefined, the container's name by default. Domain suffix comes
// after. The two together makes up a rootable domain for the host.
"ingress": {
"name": "traefik",
"host": ""
},
// An OCI image containing assets or other artifacts, copied over pre-start.
"configContainer": null,
// Pull assets and artifacts from remote sources, such as object stores, remote
// services, and so on.
"importContainer": null,
// standard initContainers
"InitContainers": []InitContainer
// Docker only, see Docker documentation
// https://docs.docker.com/config/containers/logging/configure/
"logConfig": null,
// Standard variable. Can be always, none, unless_stopped, and so on.
"restartPolicy": "unless_stopped",
// Docker specific networkMode configuration.
"networkMode": "none",
// Existing networks to attach the container with.
"networks": [],
// Kubernetes only. See Kubernetes documentation.
"deploymentStrategy": "unknown",
// HTTP reverse proxy custom headers.
"customHeaders": [],
// Widely used headers in case of RESTful APIs.
"proxyHeaders": false,
// Kubernetes only. See Kubernetes documentation. Changes service
// type to loadbalancer.
"useLoadBalancer": false,
// Kubernetes only. See Kubernetes documentation. Custom
// loadbalancer annotations, eg. to define internal loadbalancer.
"extraLBAnnotations": null,
// Kubernetes only. See Kubernetes documentation.
"healthCheckConfig": null,
// Kubernetes only. See Kubernetes documentation.
"resourceConfig": null,
// Running application container name.
"name": "mysql",
// Environment variables used to configure application behavior.
// Tribute to https://12factor.net
"environment": {},
// WIP (not available yet) configure container behavior based on annotations.
"capabilities": {},
// Standard Docker labels. See Docker documentation.
"dockerLabels": {},
// Custom labels for each basic k8s component.
"labels": {
// Key value pairs for k8s deployments.
"deployment": null,
// Key value pairs for k8s service.
"service": null,
// Key value pairs for k8s ingress.
"ingress": null
},
// Custom annotations for each basic k8s component. See labels above,
// or k8s documentation.
"annotations": null
}
Self-managed GitLab is an open-source version of GitLab for managing your code. Its key advantage is that users can configure GitLab to their needs. You can quickly set it up on your infrastructure. The image comes with the latest tag in the template.
After the Node where you'd like to run GitLab is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, self-managed GitLab is ready to use at localhost:21080 by default.
Monitoring gives you instant feedback on whether a deployment was successful, as seen below.
Besides deployment feedback, the platform is useful when you need to check up on your infrastructure: all you need to do is open the platform to get logs.
The CLI is a tool for deploying a complete dyrector.io stack locally for demonstration, testing, or development purposes.
Before using the CLI, make sure you have the following dependencies installed on your local machine:
Docker installed on your system (Podman works, too).
Go (1.20 or higher) to run the go install command.
Step 1: Execute go install github.com/dyrector-io/dyrectorio/golang/cmd/dyo@main
Step 2: Execute dyo up
Step 3: Navigate to localhost:8000 (this is the default Traefik port) and you will see a login screen.
Step 4: Register an account with whatever e-mail address you see fit (it doesn't have to be a valid one).
Step 5: Navigate to localhost:4436, where you will find your mail, as all outgoing e-mails land here.
Step 6: Open your e-mail message and activate your account using the link inside.
Step 7: Happy deploying! 🎬
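If you prefer to run the whole quickstart in one go, the steps above boil down to the following commands – a minimal sketch, assuming Go 1.20+ and Docker (or Podman) are already installed, using the default ports mentioned above:
# Install the CLI and start the stack
go install github.com/dyrector-io/dyrectorio/golang/cmd/dyo@main
dyo up
# Register at http://localhost:8000, then activate the account
# via the e-mail captured at http://localhost:4436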
Step 1: Using a command line – POSIX shells, Git Bash or PowerShell – pull dyrector.io's GitHub repository by executing git pull. If you don't already have the repository on your local machine, clone it by executing git clone https://github.com/dyrector-io/dyrectorio.git.
Step 2: Open the project folder and execute make up – an alias for go run ./golang/cmd/dyo up – and wait until you get the command prompt back. The first run takes a few minutes, as it pulls a few Docker images.
Step 3: Enter localhost:8000 in your browser's address bar. You're ready to use the platform.
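Put together, the repository-based route looks roughly like this, assuming Git, Go and make are installed:
git clone https://github.com/dyrector-io/dyrectorio.git
cd dyrectorio
make up    # alias for: go run ./golang/cmd/dyo up
# then open http://localhost:8000 in your browser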
The command-line interface (CLI) lets you run a complete development environment of the platform locally with the following services: UI service (crux-ui), backend service (crux), PostgreSQL databases, authentication, migrations, and an SMTP mock mail server. The default container names are listed below:
dyo-stable_traefik
dyo-stable_crux-postgres
dyo-stable_kratos-postgres
dyo-stable_kratos
dyo-stable_kratos-migrate
dyo-stable_crux
dyo-stable_crux-migrate
dyo-stable_mailslurper
dyo-stable_crux-ui
The dyo-stable prefix can be changed in the settings.yaml file of the application.
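For a one-off override you can also pass the prefix with the --prefix global option documented below, for example:
dyo --prefix my-stack up
# containers will be prefixed with my-stack instead of the dyo-stable defaults above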
USAGE:
dyo [global options] command [command options] [arguments...]
COMMANDS:
up, u Run the stack
down, d Stop the stack
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--disable-crux, --dc disable crux(backend) service (default: false) [$DISABLE_CRUX]
--disable-crux-ui, --dcu disable crux-ui(frontend) service (default: false) [$DISABLE_CRUXUI]
--local-agent, --la will set crux env to make dagent connect to localhost instead of container network, it's useful when you use a non-containerized agent (default: false) [$LOCAL_AGENT]
--write, -w enables writing configuration, storing current state (default: false)
--debug enables debug messages (default: false)
--disable-forcepull try to use images locally available (default: false) [$DISABLE_FORCEPULL]
--disable-podman-checks disabling podman checks, useful when you run the CLI in a container (default: false) [$DISABLE_PODMAN_CHECKS]
--config value, -c value persisted configuration path (default: $HOME/.config/dyo-cli/settings.yaml) [$DYO_CONFIG]
--imagetag value image tag, it will override the config [$DYO_IMAGE_TAG]
--prefix value, -p value prefix that is prepended to container names (default: dyo-stable) [$PREFIX]
--local-imagetag value special local image tag, CLI will try to find it and use it, otherwise it will fall back to config [$DYO_LOCAL_IMAGE_TAG]
--network value custom network, overriding the configuration [$DYO_NETWORK]
--expect-container-env when both the stack and observer are running inside containers, like during e2e tests (default: false) [$DYO_FULLY_CONTAINERIZED]
--silent, -s hides the welcome message and minimizes chattiness (default: false)
--help, -h show help
--version, -v print the version
As you have seen above, you can start the application with the up subcommand; after you've finished your work, you can stop and remove the containers with the down subcommand.
Running the stack again without stopping it will stop, remove, and recreate the containers.
The CLI generates a settings.yaml file containing the default configuration if it doesn't find one on the given path, or on the default path if no path was given. The default path depends on your OS:
Linux: $XDG_CONFIG_HOME/dyo-cli/settings.yaml, where $XDG_CONFIG_HOME usually resolves to $HOME/.config.
macOS: $HOME/Library/Application Support/dyo-cli/settings.yaml.
Windows: %AppData%/dyo-cli/settings.yaml.
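You can also point the CLI at a configuration file of your choosing with the --config (-c) global option listed above, for example:
dyo --config ./my-settings.yaml up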
The settings.yaml file contains the following:
# This option directly affects the docker images' tags.
# This applies to kratos, crux, and crux-ui. Currently we offer stable and latest tags.
version: latest
# The network's name where the containers will run. Agent will be installed here by
# agent's install script.
network-name: dyo-stable
# The following settings are mostly self-describing; you can see the default values
# here. If an option isn't present here, the CLI will use a predefined default,
# where secrets default to a new 32-character random string. The ports here
# will be exposed for your convenience; if any local address is already bound to a
# port defined here, feel free to change it and restart the CLI.
options:
# Timezone of the application affects both crux and crux-ui.
timezone: UTC
crux-agentgrpc-port: 5000
crux-http-port: 1848
crux-ui-port: 3000
crux-secret: c8S9683U4eXHWr5ikqijZKAaQ89Lb9KJ
cruxPostgresPort: 5432
cruxPostgresDB: crux
cruxPostgresUser: crux
cruxPostgresPassword: rHNxQdtNQCMywNLAT07HtYKBOJw659rL
traefikWebPort: 8000
traefikUIPort: 8080
traefikDockerSocket: /var/run/docker.sock
# Necessary for Windows installations
traefikIsDockerSocketNamedPipe: false
kratosAdminPort: 4434
kratosPublicPort: 4433
kratosPostgresPort: 5433
kratosPostgresDB: kratos
kratosPostgresUser: kratos
kratosPostgresPassword: PHKZhJpdTcWO7KoKdkSVAJjvoiHum597
kratosSecret: A7ovxCOyBt8hbPfslcD37ZDcKGLKi8of
mailSlurperSMTPPort: 1025
mailSlurperWebPort: 4436
mailSlurperWebPort2: 4437
mailFromName: dyrector.io Platform
mailFromEmail: [email protected]
Please note that this file also stores some state, in this case passwords and secrets. These have to stay the same to use the installation multiple times; with wrong passwords you won't be able to update or use the databases.
To get usage tips and learn more about the available commands from within the CLI, run the following:
dyo help
If you have additional questions or ideas for new features, you can open an issue or start a new discussion on our CLI's open-source repository. You can also chat with our team on Discord.
We’d love to hear from you!
Registries are third-party registries where the images of versions are located. Learn more about registries.
There are two kinds of projects in dyrector.io: versionless and versioned. Versionless projects make up one deployable unit without versioning, while versioned projects come with multiple rolling or incremental versions. More details.
Versions belong to versioned projects. Versionless projects act similarly to a rolling version of a versioned project.
The purpose of versions is to separate different variations of your project. They can be either rolling or incremental. One versioned project can have multiple versions of both types. More details about rolling and incremental versions.
Images make up a versioned project's version, or a versionless project.
Teams are the shared entity of multiple users. The purpose of teams is to separate users, nodes, and projects based on their needs within an organization. Team owners can assign roles. More details about teams.
Users/Me cover endpoints related to your user profile.
Deployments are the process that installs your versions or versionless projects on the node of your choice. More details about deployments.
Tokens are the access tokens that grant you access to a user profile and the teams the profile is a member of.
Nodes are the deployment targets. Nodes are registered by installing at least one of the agents – crane for Kubernetes, dagent for Docker. These agents connect the platform to your node. One team can have as many nodes as they like.
Node installation takes place with Shell or PowerShell scripts, which can be created or revoked. More details.
Audit log is a log of team activity generated by the platform.
Health refers to the status of the different services that make up the platform. It can be checked to see if the platform works properly.
Notifications are chat notifications in Slack, Discord, and Teams. They send an automated message about deployments, new versions, new nodes, and new users. More details.
Templates are preset applications that can be turned into a project right away. They can be deployed with minimal configuration. More details about templates.
Dashboard summarizes the latest activities of a team.
Storages are S3-compatible storages. They can be used for file injection. More details.
Introduced node connection improvements: added a node kick option, fixed an issue with updated agents getting kicked, and added an agent connection mechanism that attempts connection with the stored token first, then with the environment token if the first attempt fails. Delivered a fix for agents occasionally crashing when a container is deleted. Fixed deployment logs and container event logs overwriting each other. Added Rocket.Chat and Mattermost notification integrations. Added working directory as a container option. Integrated PostHog for quality assurance purposes (more details about it here) - tracking can be disabled. Private Docker Hub registries are now supported. Added healthchecks to the CLI's up command. Added a show icon feature to relevant pages. Minor fixes and improvements.
Shout-out to chandhuDev for his contribution to this release.
More details about this release on GitHub.
Container settings (docker inspect) are now available in the platform. Updated deployment process screen with a progress bar. Container config fields are node type based now. Various fixes and updates: ory/kratos identity listing, unnecessary websocket error toasts, audit event filtering, key-value input error messages. Other fixes and improvements. Thanks to our Hacktoberfest contributors:
PapePathe
pedaars
GuptaPratik02
harshsinghcs
akash47angadi
More details about this release on GitHub.
Improved yup validation on the UI of the platform. Agent improvements: added an updating status to the agent, the update button is disabled when there's no update available, and fixed an agent update issue where the agent got stuck at an older version. Deployments are listed on the node detail page. Fixed a deployment issue where secrets were copied to a different node's deployment. Other fixes and improvements. More details about this release on GitHub.
Reworked agent connection handling to offer a more secure and stable user experience for node management. Added category labels to the platform's containers for better usability. The stack's Go toolchain was upgraded, deploymentStrategy is now utilized, and port routing is explicit. Implemented a fix for port range exposure. Minor fixes and improvements. More details about this release here.
Implemented two new capabilities: configuration bundles and protected deployments. Configuration bundles are configuration templates you can apply to other stacks you manage with the platform. Protected deployments prevent overwriting certain stacks on an infrastructure. Self-managed dyrector.io stack now pulls the latest image when a new version is available. Made several improvements to the UI of the platform: added deployment creation card, table sorting, and images are listed now on the page of a registry. We turned image and instance configuration settings more distinct from each other. Improved sign up and team validation workflow. Added MLflow template. Minor fixes and improvements. More details about this release here.
Added team slug to API endpoints. Implemented node check before deletion. Self-managed dyrector.io improvements: added HEALTHCHECK directives to self-managed dyrector.io images, upgraded ory/kratos to 1.0 in the dyrector.io stack. dagent improvements: host rule removed when no domain is given, unix socket based healthcheck. Configuration screen improvements: renamed ingress to routing in container configuration to simplify domain specification in the config editor, swapped internal and external port inputs, port validation fixes. Made improvements to teams. Other fixes and improvements. More details about this release here.
Implemented fixes to dagent related issues, deployment token migration. UI improvements. More details about this release here.
Local images can be added as unchecked registry. Fixed a bug that prevented users from generating a CD token. Minor fixes. More details about this release here.
Made continuous deployment token improvements: a name can be added to tokens for better usability, CD events show up in the audit log, and CD tokens have a never-expire option. Social sign-in is now available: GitHub, GitLab, Google, Azure. Fixed a node edit bug. Made improvements to the agent and the onboarding checklist. Minor fixes and updates. More details about this release here.
Added deployment tokens to trigger CD pipelines. Versionless projects can be converted to versioned. You can select what images you'd like to deploy. Improved registry workflow. Added reload when Kratos isn't available. Small UI improvements. Minor fixes and updates. More details about this release here.
Added crane to the signer image and added an additional cache restore. More details about this release here.
Implemented principle of least privilege RBAC when managing a Kubernetes cluster through the platform. Improvements to node setup flow, container management, dagent registry auth. Minor fixes and improvements. More details about this release here.
The release includes a fix for minor versioning in the CI process and a change in the release script to incorporate the version of Golang components. More details about this release here.
Added onboarding checklist to the dashboard to guide users through deployment process. Automated multiarch builds to dyrector.io agent, including ARM. Renamed products to projects and their types: simple to versionless, complex to versioned. Improved audit logs with agent connectivity data. Rolling projects are now copyable. Fixes and improvements. More details about this release here.
Fixes and improvements to secrets, private V2 registry addition to the platform, and container view UI improvements. More details about this release here.
Minor fixes. More details about this release here.
We made various improvements to the project codebase, including adding an offline bundle Makefile target for offline development. We also enhanced the documentation by refactoring the README.md file, including FAQs and a CLI docs link. Unused texts were removed, making the readme more concise. To improve usability, we enhanced the API descriptions and examples in the web documentation. We also introduced container config annotations for greater container configuration flexibility. Deployment management and tracking were improved with the implementation of deployment and event index functionalities.
We resolved a team invite captcha error and introduced code formatting for better readability. For logging and monitoring, we implemented HTTP and WebSocket audit log functionalities. Documentation organization was enhanced by adding .md files, and we improved the pull request labeling process. Title validation for pull requests was implemented, and OpenAPI descriptions and UUID parameter handling were validated. A PR labeler was added to automate labeling, and a deployment events API was introduced. In the UI module, we made the signup page responsive and implemented reCAPTCHA for team invites to enhance security.
More details about this release here.
This version includes various bug fixes, template refinements, and updates to container IDs and UI status. Additionally, technology labels were added to templates and a demo video was introduced in the release.
More details about this release here.
This release includes a number of bug fixes and new features across various components of the software stack. Notably, the crane component was fixed to use the original containerPreName as the namespace name, the web component now has a Gitea template and a PowerShell script, and the agent component now has an option to run without CPU limits. The ci component has also been updated with new e2e testing and image building capabilities, as well as improved image signing and push to DockerHub. Other improvements include new templates and health checks, as well as updates to dependencies and minor UI fixes.
More details about this release here.
Agents - dagent and crane - support ARM powered nodes now. New templates are added: Gitea & Minecraft server. Minor fixes to deployments, JSON editor and mapper.
More details about this release here.
Dashboard is available to get a quick glance of the latest deployments and statistics. Templates are now available for 5 applications: WordPress, Strapi, self-managed GitLab, Google Microservices Demo, LinkAce. Configuration management improvements and fixes: previous secrets are reused, listing glitches of variables are fixed in filters. Improvements to dagent and crane: distroless image, abs path support for mounts, container builder extra hosts.
More details about this release here.
dagent updated: POSIX and MinGW64 are supported now. dagent updates and deletes itself when a new version of dagent is released. CLI improvements: verbose Docker & Podman error messages when the CLI is set up. Config updates: common, Kubernetes & Docker variable filters & new variables (labels & annotations) added, unmatched secrets are invalidated, users can now mark secrets as required. Deployment updates: flow fixed, new capabilities (deletion & copy) available. Link navigation fixes in the signup email. crane registry auth update.
Thanks to our Hacktoberfest contributors:
SilverTux (add unit test for crane clients),
joremysh (add unit tests for agent/internal/crypt),
oriapp (replaced all try-catch's with catch (err)),
tg44 (zerolog introduced),
raghav-rama (add new docker container builder attribute: shell),
minhoryang (crane init for private-key generate on k8s, crane init at k8s deployment for shared secret, goair for new watch tool and makefile modified),
clebs (Add unit tests for golang/internal/mapper)
659 files changed, 48182 insertions(+), 24050 deletions(-)
More details about this release here.
Fixed Prisma segfault.
More details about this release here.
CLI tool developed for quick local setup. Read here how to use CLI.
Container configuration & secret management improved. Google Container Registries are supported. Notifications are implemented for Discord, Slack and Microsoft Teams to notify teammates of new Nodes, Products, deployment statuses and new teammates. Google Microservices Demo is available with DemoSeeder implementation. Agent related improvements. E2E tested with Playwright. Improved Audit log with pagination and server side filtering. Status page available to check the statuses of the services of the platform. User facing documentation is available. Minor glitches and bugs fixed.
More details about this release here.
Vital fixes and cleanups. Extended and updated unit tests in the agent.
More details about this release here.
Migration into a monorepo on GitHub to measure up to open-source requirements. Automations and multi-platform support – Apple Silicon, Windows – are now available to provide a convenient developer experience. The agent's install script was added with macOS support. Contribution guidelines – code of conduct and README – were added.
More details about this release here.
Our main goal is to provide you with fast and reliable open-source services. To achieve this, we collect performance data about the navigation and the application feature usage, and send anonymous usage statistics to our servers. This data helps us track how changes affect the performance and stability of our open-source service and identify potential issues.
We are fully transparent about what data we collect, why we collect it, and how it's transmitted. The source code for the telemetry package is open-source and can be found here. If you do not want to share telemetry data and help improve our projects, you can opt out of this feature.
To provide a clear understanding of why we collect data, how it's collected, and what we do with it, as well as real-world examples of how this data has improved our projects, let's break down the data processing pipeline:
Telemetry data is collected from the browser.
This data is periodically sent to eu.posthog.com.
The data is stored and analyzed using the PostHog platform.
Our data processing pipeline has been designed with specific goals in mind:
To track the number of dyrector.io installs.
To understand which features are in use and how they are utilized.
To evaluate the frequency of specific feature usage.
To detect issues introduced by new features, such as buggy releases.
You have the option to disable quality assurance features, also known as telemetry, by using the environment variable QA_OPT_OUT=true. Disabling telemetry doesn't have any drawbacks, except that it prevents us from making improvements to the project.
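In a self-managed installation this corresponds to the QA_OPT_OUT entry of the .env file shown later in this documentation; for an ad-hoc run you can also set it as a plain environment variable, for example (a sketch assuming a docker compose based setup like the sample .env):
# in the .env of a self-managed installation
QA_OPT_OUT=true
# or for a single run
QA_OPT_OUT=true docker compose up -d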
In order to safeguard your privacy, we take several measures to protect your data:
We are unable to access or store the IP address of your host or users. Our PostHog project's "Discard client IP data" option is active, so we are not able to identify who sent the data. You can find a comprehensive list of transmitted URL paths in the Request Telemetry section.
We do not transmit any environment information from the host except for:
Operating system (e.g., Windows, Linux, OSX)
Target architecture (e.g., amd64, darwin, ...)
Display dimensions
Browser data (e.g., Firefox, Chrome, English, browser version)
All this information is stored in an aggregated format without any personally identifiable data, ensuring your privacy is protected.
To facilitate the identification of installations, each running instance is assigned a unique identifier generated as a Universally Unique Identifier (UUID v4). This identification process is triggered when we are confident that the instance is not a test instance, such as a tutorial or a local installation.
The system metrics we collect include:
$os: The operating system of the user's device.
$browser: The web browser used by the user.
$device_type: The type of device used, such as "Desktop" or "Mobile".
$browser_version: The version of the web browser.
$browser_language: The language of the web browser.
$screen_height: The height of the user's screen in pixels.
$screen_width: The width of the user's screen in pixels.
$viewport_height: The height of the viewport in pixels.
$viewport_width: The width of the viewport in pixels.
$lib: This will always be web.
$lib_version: The version of the PostHog library.
$insert_id: An identifier associated with the insertion.
$time: A timestamp associated with the event.
distinct_id: A distinct identifier for the user or event.
$device_id: The identifier associated with the user's device.
$groups: A dictionary of group identifiers.
These measures allow us to effectively manage and group installations while maintaining data security and privacy.
The full code-base is open source.
{
"uuid": "018b8632-5d72-7461-95e1-985aaa8590e7",
"event": "select-chip",
"properties": {
"$os": "Linux",
"$browser": "Firefox",
"$device_type": "Desktop",
"$pathname": "/[teamSlug]/nodes",
"$browser_version": 119,
"$browser_language": "en-US",
"$screen_height": 1080,
"$screen_width": 1920,
"$viewport_height": 921,
"$viewport_width": 1093,
"$lib": "web",
"$lib_version": "1.85.2",
"$insert_id": "38hidmng7xk5jtet",
"$time": 1698763529.587,
"distinct_id": "018b860e-5987-79b1-ac1e-eaf008c67fcd",
"$device_id": "018b860e-5987-79b1-ac1e-eaf008c67fcd",
"$groups": {
"dyoInstance": "8c5c8b11-3c21-4d5a-b09b-fefb56647aaf"
},
"$console_log_recording_enabled_server_side": false,
"$session_recording_recorder_version_server_side": "v2",
"$autocapture_disabled_server_side": true,
"$active_feature_flags": [],
"$feature_flag_payloads": {},
"$referrer": "$direct",
"$referring_domain": "$direct",
"elementType": "button",
"label": "nodeType-docker",
"token": "phc_wc4nEUy7HYW8Elf8G9jiccvl2ZYt8pVts8NVBRMPzHu",
"$session_id": "018b860e-5988-748e-9880-ab38ffb8c284",
"$window_id": "018b860e-5988-748e-9880-ab39b3e577af",
"$set": {
"$os": "Linux",
"$browser": "Firefox",
"$device_type": "Desktop",
"$pathname": "/[teamSlug]/nodes",
"$browser_version": 119,
"$referrer": "$direct",
"$referring_domain": "$direct"
},
"$set_once": {
"$initial_os": "Linux",
"$initial_browser": "Firefox",
"$initial_device_type": "Desktop",
"$initial_pathname": "/[teamSlug]/nodes",
"$initial_browser_version": 119,
"$initial_referrer": "$direct",
"$initial_referring_domain": "$direct"
},
"$sent_at": "2023-10-31T14:45:29.624000+00:00",
"$group_0": "8c5c8b11-3c21-4d5a-b09b-fefb56647aaf"
},
"timestamp": "2023-10-31T14:45:35.050Z",
"team_id": 10816,
"distinct_id": "018b860e-5987-79b1-ac1e-eaf008c67fcd",
"elements_chain": "",
"created_at": "2023-10-31T14:45:35.132Z"
}
# This composition of compose files reflects the old one
COMPOSE_FILE=docker-compose.yaml:distribution/compose/docker-compose.traefik.yaml:distribution/compose/docker-compose.traefik-labels.yaml:distribution/compose/docker-compose.mail-test.yaml
# # Docker settings
# Traefik requires access to this socket to be able to route the requests to the containers
DOCKER_SOCKET=/var/run/docker.sock
# # General
# Tag for images. It's stable by default
DYO_VERSION=stable
# Required for Traefik's certificate resolution
# It should be your domain where dyrector.io will be available
DOMAIN=example.com
# Your server's timezone
TIMEZONE=UTC
# Required for Traefik's certificate resolution
# If there's an issue with the certificate, or when it expires,
# letsencrypt will send a notification to this e-mail address
[email protected]
# NodeJS services can run in two modes: production and development
# These are the two values this key can have
NODE_ENV=production
# # Crux service settings
# You can specify how thorough logging will be
# Options: verbose, debug, info, warning, error
# The levels form a hierarchy: in the order above, each level includes the ones after it
# Example: 'warning' contains 'error'
LOG_LEVEL=debug
# Secret key for encrypting stored credentials
# Can be generated using the CLI
# Example: docker run --rm ghcr.io/dyrector-io/dyrectorio/cli/dyo:latest generate crux encryption-key
ENCRYPTION_SECRET_KEY=Random_Generate_Key
# # Database passwords
# This value is the password to crux's database
CRUX_POSTGRES_PASSWORD=Random_Generated_String
# This value is the password to Kratos' database
KRATOS_POSTGRES_PASSWORD=Random_Generated_String
# This value is the password to root user
POSTGRES_ROOT_PASSWORD=Random_Generated_String
# # External URL of the site https://example.com(:port if not 443)
# This setting is to define where your
# self-managed dyrector.io will be available
EXTERNAL_PROTO=https
# # Cookie/JWT secrets
# Secret to sign JWTs.
CRUX_SECRET=Random_Generated_String
# Secret to sign Kratos cookies
# More details in Ory/Kratos documentation:
# https://www.ory.sh/docs/kratos/reference/configuration
KRATOS_SECRET=Random_Generated_String
# # Mailserver settings
# The connection string for the mail server
# The protocol can be SMTP or SMTPS
# Example: protocol://smtp_user:smtp_password@mailserver_ip_or_domain:port
SMTP_URI=smtps://username:[email protected]:465
# E-mail address for dyrector.io invitation links,
# password resets and others
[email protected]
# E-mail sender name for dyrector.io invitation links,
# password resets and others
FROM_NAME=dyrector.io
# # ReCAPTCHA secrets
# In case you don't want to use ReCAPTCHA set DISABLE_RECAPTCHA to true
# Highly recommended to keep the default value, which is `false`
DISABLE_RECAPTCHA=false
# Create ReCAPTCHA V2 credentials in the ReCAPTCHA admin console
# It is recommended to use the invisible type
RECAPTCHA_SECRET_KEY=Recaptcha_Secret_Key
RECAPTCHA_SITE_KEY=Recaptcha_Site_Key
# To turn off Quality Assurance (default: false)
# more info: https://docs.dyrector.io/learn-more/quality-assurance-qa
# QA_OPT_OUT=true
# For providing a group identifier codename for the collected usage data
# QA_GROUP_NAME=
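The Random_Generated_String and Random_Generate_Key placeholders above have to be replaced with real secrets. One possible way to generate them, assuming openssl is available on your machine:
# 32-character hexadecimal string, suitable for the *_PASSWORD and *_SECRET values
openssl rand -hex 16
# the encryption key can be generated with the CLI, as noted in the comments above
docker run --rm ghcr.io/dyrector-io/dyrectorio/cli/dyo:latest generate crux encryption-key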
Lists every registry available in the active team. The request must include teamSlug in the URL. The response is an array including the name, id, type, description, and icon of each registry. Registries are third-party registries where the container images are stored.
GET /api/{teamSlug}/registries HTTP/1.1
Host:
Accept: */*
[
{
"type": "v2",
"id": "text",
"name": "text",
"description": "text",
"icon": "text",
"url": "text"
}
]
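As a rough illustration, the same request can be issued with curl. Hypothetical values: my-team stands for your team slug and $DYO_TOKEN for an access token created under Tokens, assumed here to be accepted as a Bearer token:
curl -H "Authorization: Bearer $DYO_TOKEN" \
  https://example.com/api/my-team/registries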
To add a new registry, include teamSlug in the URL; the body must include name, type, description, details, and icon. type, details, and name are required. The response includes the name, id, type, description, imageNamePrefix, inUse, icon, and audit log info of the registry.
POST /api/{teamSlug}/registries HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 102
{
"type": "group",
"details": {
"imageNamePrefix": "text"
},
"name": "text",
"description": "text",
"icon": "text"
}
{
"type": "group",
"details": {
"imageNamePrefix": "text"
},
"id": "text",
"name": "text",
"description": "text",
"icon": "text",
"inUse": true,
"createdAt": "2025-07-05T16:10:44.530Z",
"updatedAt": "2025-07-05T16:10:44.530Z"
}
Lists the details of a registry. The request must include teamSlug and registryId in the URL. registryId refers to the registry's ID. The response includes the name, id, type, description, imageNamePrefix, inUse, icon, and audit log info of the registry.
GET /api/{teamSlug}/registries/{registryId} HTTP/1.1
Host:
Accept: */*
{
"type": "group",
"details": {
"imageNamePrefix": "text"
},
"id": "text",
"name": "text",
"description": "text",
"icon": "text",
"inUse": true,
"createdAt": "2025-07-05T16:10:44.530Z",
"updatedAt": "2025-07-05T16:10:44.530Z"
}
Modifies the name, type, description, details, and icon of a registry. registryId refers to the registry's ID. teamSlug and registryId are required in the URL; the body must include type, details, and name.
PUT /api/{teamSlug}/registries/{registryId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 102
{
"type": "group",
"details": {
"imageNamePrefix": "text"
},
"name": "text",
"description": "text",
"icon": "text"
}
No content
Deletes the registry with the specified registryId. teamSlug and registryId are required in the URL.
DELETE /api/{teamSlug}/registries/{registryId} HTTP/1.1
Host:
Accept: */*
No content
Returns a list of a team's projects and their details. teamSlug needs to be included in the URL.
GET /api/{teamSlug}/projects HTTP/1.1
Host:
Accept: */*
[
{
"type": "versionless",
"description": "text",
"versionCount": 1,
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"id": "text",
"name": "text"
}
]
Creates a new project for a team. teamSlug needs to be included in the URL. A newly created project has a type and a name as required variables, and optionally a description and a changelog.
POST /api/{teamSlug}/projects HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 76
{
"type": "versionless",
"name": "text",
"description": "text",
"changelog": "text"
}
{
"type": "versionless",
"description": "text",
"versionCount": 1,
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"id": "text",
"name": "text"
}
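A curl sketch of the same call, with the hypothetical my-team slug and the $DYO_TOKEN Bearer-token assumption used in the registry example above:
curl -X POST \
  -H "Authorization: Bearer $DYO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type": "versionless", "name": "my-project", "description": "demo project"}' \
  https://example.com/api/my-team/projects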
Returns a project's details. teamSlug and projectId need to be included in the URL. The response contains the project's name, id, type, description, deletability, and versions with version-related data, including the version's name and id, changelog, and increasability.
GET /api/{teamSlug}/projects/{projectId} HTTP/1.1
Host:
Accept: */*
{
"type": "versionless",
"description": "text",
"deletable": true,
"versions": [
{
"type": "incremental",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"changelog": "text",
"default": true,
"increasable": true,
"id": "text",
"name": "text"
}
],
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"id": "text",
"name": "text"
}
Updates a project. teamSlug is required in the URL, as well as projectId to identify which project is modified. name, description, and changelog can be adjusted with this call.
PUT /api/{teamSlug}/projects/{projectId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 55
{
"name": "text",
"description": "text",
"changelog": "text"
}
No content
Deletes the project with the specified projectId. teamSlug and projectId are required in the URL.
DELETE /api/{teamSlug}/projects/{projectId} HTTP/1.1
Host:
Accept: */*
No content
Converts a project to versioned with the specified projectId. teamSlug and projectId are required in the URL.
POST /api/{teamSlug}/projects/{projectId}/convert HTTP/1.1
Host:
Accept: */*
No content
Returns an array containing every version that belongs to a project. teamSlug and projectId must be included in the URL. projectId refers to the project's ID. Details include the version's name, id, type, audit log details, changelog, and increasability.
GET /api/{teamSlug}/projects/{projectId}/versions HTTP/1.1
Host:
Accept: */*
[
{
"type": "incremental",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"changelog": "text",
"default": true,
"increasable": true,
"id": "text",
"name": "text"
}
]
Creates a new version in a project. projectId refers to the project's ID. teamSlug and projectId must be included in the URL; the request body needs to include the name and type of the version, while changelog is optional. The response includes the name, id, changelog, increasability, type, and audit log details of the version.
POST /api/{teamSlug}/projects/{projectId}/versions HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 55
{
"type": "incremental",
"name": "text",
"changelog": "text"
}
{
"type": "incremental",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"changelog": "text",
"default": true,
"increasable": true,
"id": "text",
"name": "text"
}
Returns the details of a version in a project. teamSlug and projectId must be included in the URL. projectId refers to the project's ID, versionId refers to the version's ID. Details include the version's name, id, type, audit log details, changelog, increasability, mutability, deletability, and all image-related data, including the name, id, tag, order, and configuration data of the images.
GET /api/{teamSlug}/projects/{projectId}/versions/{versionId} HTTP/1.1
Host:
Accept: */*
{
"type": "incremental",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"changelog": "text",
"default": true,
"increasable": true,
"id": "text",
"name": "text",
"mutable": true,
"deletable": true,
"images": [
{
"id": "text",
"name": "text",
"tag": "text",
"order": 1,
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"secrets": [
{
"required": true,
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
},
"createdAt": "2025-07-05T16:10:44.530Z",
"registry": {
"type": "v2",
"id": "text",
"name": "text"
}
}
],
"deployments": [
{
"status": "preparing",
"note": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"node": {
"type": "docker",
"status": "unreachable",
"id": "text",
"name": "text"
},
"id": "text",
"prefix": "text"
}
]
}
Updates a version's name and changelog. teamSlug, projectId, and versionId must be included in the URL. projectId refers to the project's ID, versionId refers to the version's ID.
PUT /api/{teamSlug}/projects/{projectId}/versions/{versionId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 34
{
"name": "text",
"changelog": "text"
}
No content
This call deletes a version. teamSlug, projectId, and versionId must be included in the URL. projectId refers to the project's ID, versionId refers to the version's ID.
DELETE /api/{teamSlug}/projects/{projectId}/versions/{versionId} HTTP/1.1
Host:
Accept: */*
No content
This call turns a version into the default one, so other versions created later within this project inherit images, deployments, and their configurations from it. teamSlug, projectId, and versionId must be included in the URL. projectId refers to the project's ID, versionId refers to the version's ID.
PUT /api/{teamSlug}/projects/{projectId}/versions/{versionId}/default HTTP/1.1
Host:
Accept: */*
No content
Increases the version of a project with a new child version. teamSlug, projectId, and versionId must be included in the URL. projectId refers to the project's ID, versionId refers to the version's ID. name refers to the name of the new version and is required in the body.
POST /api/{teamSlug}/projects/{projectId}/versions/{versionId}/increase HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 34
{
"name": "text",
"changelog": "text"
}
{
"type": "incremental",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"changelog": "text",
"default": true,
"increasable": true,
"id": "text",
"name": "text"
}
Fetches the details of images within a version. projectId refers to the project's ID, versionId refers to the version's ID. Both, along with teamSlug, are required in the URL. Details come in an array, including the name, id, tag, order, and config details of each image.
GET /api/{teamSlug}/projects/{projectId}/versions/{versionId}/images HTTP/1.1
Host:
Accept: */*
[
{
"id": "text",
"name": "text",
"tag": "text",
"order": 1,
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"secrets": [
{
"required": true,
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
},
"createdAt": "2025-07-05T16:10:44.530Z",
"registry": {
"type": "v2",
"id": "text",
"name": "text"
}
}
]
Adds new images to a version. projectId refers to the project's ID, versionId refers to the version's ID. These, along with teamSlug, are required in the URL. registryId refers to the registry's ID, and images refers to the name(s) of the images you'd like to add. These are required variables in the body.
POST /api/{teamSlug}/projects/{projectId}/versions/{versionId}/images HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 41
[
{
"registryId": "text",
"images": [
"text"
]
}
]
[
{
"id": "text",
"name": "text",
"tag": "text",
"order": 1,
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"secrets": [
{
"required": true,
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
},
"createdAt": "2025-07-05T16:10:44.530Z",
"registry": {
"type": "v2",
"id": "text",
"name": "text"
}
}
]
Fetches the details of an image within a version. projectId refers to the project's ID, versionId refers to the version's ID, imageId refers to the image's ID. All of these, along with teamSlug, are required in the URL. Image details consist of the name, id, tag, order, and the config of the image.
GET /api/{teamSlug}/projects/{projectId}/versions/{versionId}/images/{imageId} HTTP/1.1
Host:
Accept: */*
{
"id": "text",
"name": "text",
"tag": "text",
"order": 1,
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"secrets": [
{
"required": true,
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
},
"createdAt": "2025-07-05T16:10:44.530Z",
"registry": {
"type": "v2",
"id": "text",
"name": "text"
}
}
Deletes an image. projectId refers to the project's ID, versionId refers to the version's ID, imageId refers to the image's ID. All of these, along with teamSlug, are required in the URL.
DELETE /api/{teamSlug}/projects/{projectId}/versions/{versionId}/images/{imageId} HTTP/1.1
Host:
Accept: */*
No content
Modifies the configuration variables of an image. projectId refers to the project's ID, versionId refers to the version's ID, imageId refers to the image's ID. All of these, along with teamSlug, are required in the URL. tag refers to the version of the image, and config is an object of configuration variables.
PATCH /api/{teamSlug}/projects/{projectId}/versions/{versionId}/images/{imageId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 2029
{
"tag": "text",
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"secrets": [
{
"required": true,
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
}
}
No content
Edits the image deployment order of a version. projectId refers to the project's ID, versionId refers to the version's ID. Both, along with teamSlug, are required in the URL. The request body should include the IDs of the images in an array.
PUT /api/{teamSlug}/projects/{projectId}/versions/{versionId}/images/order HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 8
[
"text"
]
No content
The list of teams consists of name, id, and statistics, including the number of users, projects, nodes, versions, and deployments. Teams are the shared entity of multiple users. The purpose of teams is to separate users, nodes, and projects based on their needs within an organization. Team owners can assign roles. More details about teams here.
GET /api/teams HTTP/1.1
Host:
Accept: */*
[
{
"statistics": {
"users": 1,
"projects": 1,
"nodes": 1,
"versions": 1,
"deployments": 1
},
"id": "text",
"name": "text",
"slug": "text"
}
]
The request must include name, which is going to be the name of the newly created team. The response should include name, id, and statistics, including the number of users, projects, nodes, versions, and deployments.
POST /api/teams HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 29
{
"slug": "text",
"name": "text"
}
{
"statistics": {
"users": 1,
"projects": 1,
"nodes": 1,
"versions": 1,
"deployments": 1
},
"id": "text",
"name": "text",
"slug": "text"
}
Gets the details of a team. The request must include teamId, the ID of the team you'd like to get the data of. Team data consists of name, id, and statistics, including the number of users, projects, nodes, versions, and deployments. The response should include user details as well, including name, id, role, status, email, and lastLogin.
GET /api/teams/{teamId} HTTP/1.1
Host:
Accept: */*
{
"statistics": {
"users": 1,
"projects": 1,
"nodes": 1,
"versions": 1,
"deployments": 1
},
"id": "text",
"name": "text",
"slug": "text",
"users": [
{
"role": "owner",
"status": "pending",
"email": "text",
"lastLogin": "2025-07-05T16:10:44.530Z",
"id": "text",
"name": "text"
}
]
}
The request must include teamId and name. Admin access is required for a successful request.
PUT /api/teams/{teamId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 29
{
"slug": "text",
"name": "text"
}
No content
The request must include teamId, email, and firstName. The response should include the new user's name, id, role, status, email, and lastLogin. Admin access is required for a successful request.
POST /api/teams/{teamId}/users HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 70
{
"email": "text",
"firstName": "text",
"lastName": "text",
"captcha": "text"
}
{
"role": "owner",
"status": "pending",
"email": "text",
"lastLogin": "2025-07-05T16:10:44.530Z",
"id": "text",
"name": "text"
}
Promotes or demotes a user. The request must include teamId, userId, and role. Admin access is required for a successful request.
PUT /api/teams/{teamId}/users/{userId}/role HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 16
{
"role": "owner"
}
No content
Removes the current user from the team. The request must include teamId.
DELETE /api/teams/{teamId}/users/leave HTTP/1.1
Host:
Accept: */*
No content
Removes a user from the team. The request must include teamId and userId. Admin access is required for a successful request.
DELETE /api/teams/{teamId}/users/{userId} HTTP/1.1
Host:
Accept: */*
No content
This call sends a new invitation link to a user who hasn't accepted an invitation to a team. The request must include teamId and userId. Admin access is required for a successful request.
POST /api/teams/{teamId}/users/{userId}/reinvite HTTP/1.1
Host:
Accept: */*
No content
Response includes the user, teams, and invitations.
POST /api/users/me HTTP/1.1
Host:
Accept: */*
{
"user": {
"id": "text",
"name": "text"
},
"teams": [
{
"role": "owner",
"id": "text",
"name": "text",
"slug": "text"
}
],
"invitations": [
{
"id": "text",
"name": "text",
"slug": "text"
}
]
}
Get the list of deployments. Request must include teamSlug in the URL. A deployment should include id, prefix, status, note, audit log details, and the name, id and type of its project, version and node.
GET /api/{teamSlug}/deployments HTTP/1.1
Host:
Accept: */*
[
{
"status": "preparing",
"note": "text",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"project": {
"type": "versionless",
"id": "text",
"name": "text"
},
"version": {
"type": "incremental",
"id": "text",
"name": "text"
},
"node": {
"type": "docker",
"id": "text",
"name": "text"
},
"id": "text",
"prefix": "text"
}
]
Request must include teamSlug in the URL. The body must include versionId, nodeId, and prefix, which refer to the version, the node and the prefix of the deployment. Response should include the deployment's id, prefix, status, note, and audit log details, as well as the type, id and name of its project, version and node.
POST /api/{teamSlug}/deployments HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 66
{
"versionId": "text",
"nodeId": "text",
"prefix": "text",
"note": "text"
}
{
"status": "preparing",
"note": "text",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"project": {
"type": "versionless",
"id": "text",
"name": "text"
},
"version": {
"type": "incremental",
"id": "text",
"name": "text"
},
"node": {
"type": "docker",
"id": "text",
"name": "text"
},
"id": "text",
"prefix": "text"
}
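A hedged TypeScript sketch of creating a deployment with the fields described above; the base URL and Bearer-token authentication are placeholders:
async function createDeployment(teamSlug: string, versionId: string, nodeId: string, prefix: string, note?: string) {
  const res = await fetch(`http://localhost:8000/api/${teamSlug}/deployments`, { // placeholder base URL
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.DYO_TOKEN}`, 'Content-Type': 'application/json' }, // assumed auth scheme
    body: JSON.stringify({ versionId, nodeId, prefix, note }),
  })
  if (!res.ok) throw new Error(`Failed to create deployment: ${res.status}`)
  return res.json() // { id, prefix, status, note, audit, project, version, node }
}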
Get the details of a deployment. Request must include teamSlug and deploymentId in the URL. Deployment details should include id, prefix, environment, status, note, audit log details, and the name, id and type of its project, version and node.
GET /api/{teamSlug}/deployments/{deploymentId} HTTP/1.1
Host:
Accept: */*
{
"status": "preparing",
"note": "text",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"project": {
"type": "versionless",
"id": "text",
"name": "text"
},
"version": {
"type": "incremental",
"id": "text",
"name": "text"
},
"node": {
"type": "docker",
"id": "text",
"name": "text"
},
"id": "text",
"prefix": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"publicKey": "text",
"instances": [
{
"id": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"image": {
"id": "text",
"name": "text",
"tag": "text",
"order": 1,
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"secrets": [
{
"required": true,
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
},
"createdAt": "2025-07-05T16:10:44.530Z",
"registry": {
"type": "v2",
"id": "text",
"name": "text"
}
},
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"secrets": [
{
"required": true,
"id": "text",
"key": "text",
"value": "text",
"encrypted": true,
"publicKey": "text"
}
],
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
}
}
],
"lastTry": 1,
"token": {
"id": "text",
"name": "text",
"createdAt": "2025-07-05T16:10:44.530Z",
"expiresAt": "2025-07-05T16:10:44.530Z"
}
}
Request must include teamSlug and deploymentId in the URL.
DELETE /api/{teamSlug}/deployments/{deploymentId} HTTP/1.1
Host:
Accept: */*
No content
Request must include deploymentId and teamSlug in the URL.
PATCH /api/{teamSlug}/deployments/{deploymentId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 89
{
"note": "text",
"prefix": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
No content
Request must include teamSlug, deploymentId and instanceId, which refer to the deployment and the instance, in the URL. Instances are the manifestation of an image in the deployment. Response should include state, id, updatedAt, and image details including id, name, tag, order and config variables.
GET /api/{teamSlug}/deployments/{deploymentId}/instances/{instanceId} HTTP/1.1
Host:
Accept: */*
{
"id": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"image": {
"id": "text",
"name": "text",
"tag": "text",
"order": 1,
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"secrets": [
{
"required": true,
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
},
"createdAt": "2025-07-05T16:10:44.530Z",
"registry": {
"type": "v2",
"id": "text",
"name": "text"
}
},
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"secrets": [
{
"required": true,
"id": "text",
"key": "text",
"value": "text",
"encrypted": true,
"publicKey": "text"
}
],
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
}
}
Request must include teamSlug, deploymentId and instanceId in the URL, and a portion of the instance configuration as config in the body. Response should include the config variables in an array.
PATCH /api/{teamSlug}/deployments/{deploymentId}/instances/{instanceId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 2067
{
"config": {
"expose": "none",
"restartPolicy": "always",
"networkMode": "none",
"deploymentStrategy": "recreate",
"secrets": [
{
"required": true,
"id": "text",
"key": "text",
"value": "text",
"encrypted": true,
"publicKey": "text"
}
],
"name": "text",
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"routing": {
"domain": "text",
"path": "text",
"stripPath": true,
"uploadLimit": "text"
},
"user": 1,
"tty": true,
"configContainer": {
"image": "text",
"volume": "text",
"path": "text",
"keepFiles": true
},
"ports": [
{
"id": "text",
"internal": 1,
"external": 1
}
],
"portRanges": [
{
"id": "text",
"internal": {
"from": 1,
"to": 1
},
"external": {
"from": 1,
"to": 1
}
}
],
"volumes": [
{
"type": "ro",
"id": "text",
"name": "text",
"path": "text",
"size": "text",
"class": "text"
}
],
"commands": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"initContainers": [
{
"id": "text",
"name": "text",
"image": "text",
"command": [
{
"id": "text",
"key": "text"
}
],
"args": [
{
"id": "text",
"key": "text"
}
],
"environment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"useParentConfig": true,
"volumes": [
{
"id": "text",
"name": "text",
"path": "text"
}
]
}
],
"capabilities": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"storage": {
"storageId": "text",
"path": "text",
"bucket": "text"
},
"logConfig": {
"driver": "nodeDefault",
"options": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"networks": [
{
"id": "text",
"key": "text"
}
],
"dockerLabels": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"customHeaders": [
{
"id": "text",
"key": "text"
}
],
"proxyHeaders": true,
"useLoadBalancer": true,
"extraLBAnnotations": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"healthCheckConfig": {
"port": 1,
"livenessProbe": "text",
"readinessProbe": "text",
"startupProbe": "text"
},
"resourceConfig": {
"limits": {
"cpu": "text",
"memory": "text"
},
"requests": {
"cpu": "text",
"memory": "text"
}
},
"annotations": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
},
"labels": {
"service": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"deployment": [
{
"value": "text",
"id": "text",
"key": "text"
}
],
"ingress": [
{
"value": "text",
"id": "text",
"key": "text"
}
]
}
}
}
No content
Request must include teamSlug, deploymentId and instanceId, which refer to the deployment and the instance, in the URL. Response should include the container's prefix and name, the publicKey, and the secret keys.
GET /api/{teamSlug}/deployments/{deploymentId}/instances/{instanceId}/secrets HTTP/1.1
Host:
Accept: */*
{
"container": {
"prefix": "text",
"name": "text"
},
"publicKey": "text",
"keys": [
"text"
]
}
Request must include teamSlug and deploymentId in the URL.
POST /api/{teamSlug}/deployments/{deploymentId}/start HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 22
{
"instances": [
"text"
]
}
No content
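A sketch of starting a previously created deployment (TypeScript, Node 18+). The instances array mirrors the sample body above and presumably restricts the start to the listed instance IDs; the base URL and Bearer-token authentication are again assumptions:
async function startDeployment(teamSlug: string, deploymentId: string, instances?: string[]): Promise<void> {
  const res = await fetch(`http://localhost:8000/api/${teamSlug}/deployments/${deploymentId}/start`, { // placeholder base URL
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.DYO_TOKEN}`, 'Content-Type': 'application/json' },
    body: JSON.stringify(instances ? { instances } : {}),
  })
  if (!res.ok) throw new Error(`Failed to start deployment: ${res.status}`) // success is 204 No content
}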
Request must include teamSlug and the deploymentId of the deployment to be copied in the URL. The body must include the nodeId and prefix, and optionally a note. Response should include the deployment data: id, prefix, status, note, and details of the audit log, project, version, and node.
POST /api/{teamSlug}/deployments/{deploymentId}/copy HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 47
{
"nodeId": "text",
"prefix": "text",
"note": "text"
}
{
"status": "preparing",
"note": "text",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"project": {
"type": "versionless",
"id": "text",
"name": "text"
},
"version": {
"type": "incremental",
"id": "text",
"name": "text"
},
"node": {
"type": "docker",
"id": "text",
"name": "text"
},
"id": "text",
"prefix": "text"
}
Request must include teamSlug and deploymentId in the URL. Response should include an items array with objects of type, deploymentStatus, createdAt, log, and containerState, which consists of state and instanceId.
GET /api/{teamSlug}/deployments/{deploymentId}/log?skip=1&take=1 HTTP/1.1
Host:
Accept: */*
{
"items": [
{
"type": "log",
"deploymentStatus": "preparing",
"createdAt": "2025-07-05T16:10:44.530Z",
"log": [
"text"
],
"containerState": {
"state": "running",
"instanceId": "text"
}
}
],
"total": 1
}
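A sketch of paging through the deployment log with the skip and take query parameters shown in the sample request; the base URL and Bearer-token auth are placeholders:
async function fetchDeploymentLog(teamSlug: string, deploymentId: string, skip = 0, take = 100) {
  const url = new URL(`http://localhost:8000/api/${teamSlug}/deployments/${deploymentId}/log`) // placeholder base URL
  url.searchParams.set('skip', String(skip))
  url.searchParams.set('take', String(take))
  const res = await fetch(url, { headers: { Authorization: `Bearer ${process.env.DYO_TOKEN}` } })
  if (!res.ok) throw new Error(`Failed to fetch deployment log: ${res.status}`)
  return res.json() // { items: [{ type, deploymentStatus, createdAt, log, containerState }], total }
}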
Request must include teamSlug and deploymentId in the URL. The body must include a name and optionally the expiration given as expirationInDays.
PUT /api/{teamSlug}/deployments/{deploymentId}/token HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 36
{
"name": "text",
"expirationInDays": 1
}
{
"id": "text",
"name": "text",
"createdAt": "2025-07-05T16:10:44.530Z",
"expiresAt": "2025-07-05T16:10:44.530Z",
"token": "text",
"curl": "text"
}
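The returned token (and the ready-made curl field) is presumably meant for triggering the deployment from automation such as CI. A hedged TypeScript sketch, assuming the deployment token is accepted as a Bearer token on the start endpoint:
async function triggerDeployment(teamSlug: string, deploymentId: string, deploymentToken: string): Promise<void> {
  const res = await fetch(`http://localhost:8000/api/${teamSlug}/deployments/${deploymentId}/start`, { // placeholder base URL
    method: 'POST',
    headers: { Authorization: `Bearer ${deploymentToken}` }, // assumption: same Authorization scheme as access tokens
  })
  if (!res.ok) throw new Error(`Failed to trigger deployment: ${res.status}`)
}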
Request must include teamSlug and deploymentId in the URL.
DELETE /api/{teamSlug}/deployments/{deploymentId}/token HTTP/1.1
Host:
Accept: */*
No content
Access tokens provide secure access to the HTTP API without a cookie.
GET /api/tokens HTTP/1.1
Host:
Accept: */*
[
{
"id": "text",
"name": "text",
"expiresAt": "2025-07-05T16:10:44.530Z",
"createdAt": "2025-07-05T16:10:44.530Z"
}
]
Request must include name and expirationInDays.
POST /api/tokens HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 36
{
"name": "text",
"expirationInDays": 1
}
{
"id": "text",
"name": "text",
"expiresAt": "2025-07-05T16:10:44.530Z",
"createdAt": "2025-07-05T16:10:44.530Z",
"token": "text"
}
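A sketch of creating an access token; note that among the response schemas above only this POST returns the token value, so it should be stored securely. The base URL and Bearer-token auth for the call itself are assumptions:
async function createAccessToken(name: string, expirationInDays: number): Promise<string> {
  const res = await fetch('http://localhost:8000/api/tokens', { // placeholder base URL
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.DYO_TOKEN}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ name, expirationInDays }),
  })
  if (!res.ok) throw new Error(`Failed to create token: ${res.status}`)
  const body = await res.json()
  return body.token as string // the token value is only present in this response
}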
An access token's details are its name, id, and the times of creation and expiration. Request must include tokenId.
GET /api/tokens/{tokenId} HTTP/1.1
Host:
Accept: */*
{
"id": "text",
"name": "text",
"expiresAt": "2025-07-05T16:10:44.530Z",
"createdAt": "2025-07-05T16:10:44.530Z"
}
Fetch data of deployment targets. Request must include teamSlug in the URL. Response should include an array with each node's type, status, description, icon, address, connectedAt date, version, updating, id, and name.
GET /api/{teamSlug}/nodes HTTP/1.1
Host:
Accept: */*
[
{
"type": "docker",
"status": "unreachable",
"description": "text",
"icon": "text",
"address": "text",
"connectedAt": "2025-07-05T16:10:44.530Z",
"version": "text",
"updating": true,
"id": "text",
"name": "text"
}
]
Request must include teamSlug in the URL, and the node's name in the body. Response should include the node's type, status, description, icon, address, connectedAt date, version, updating, id, and name.
POST /api/{teamSlug}/nodes HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 50
{
"name": "text",
"description": "text",
"icon": "text"
}
{
"type": "docker",
"status": "unreachable",
"description": "text",
"icon": "text",
"address": "text",
"connectedAt": "2025-07-05T16:10:44.530Z",
"version": "text",
"updating": true,
"id": "text",
"name": "text"
}
Fetch data of a specific node. Request must include teamSlug and nodeId in the URL. Response should include the node's type, status, description, icon, address, connectedAt date, version, updating, id, name, hasToken, and agent installation details.
GET /api/{teamSlug}/nodes/{nodeId} HTTP/1.1
Host:
Accept: */*
{
"type": "docker",
"status": "unreachable",
"description": "text",
"icon": "text",
"address": "text",
"connectedAt": "2025-07-05T16:10:44.530Z",
"version": "text",
"updating": true,
"id": "text",
"name": "text",
"hasToken": true,
"install": {
"command": "text",
"script": "text",
"expireAt": "2025-07-05T16:10:44.530Z"
},
"inUse": true
}
Request must include teamSlug and nodeId in the URL, and the node's name in the body; the body can also include description and icon.
PUT /api/{teamSlug}/nodes/{nodeId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 50
{
"name": "text",
"description": "text",
"icon": "text"
}
No content
Request must include teamSlug and nodeId in the URL.
DELETE /api/{teamSlug}/nodes/{nodeId} HTTP/1.1
Host:
Accept: */*
No content
Request must include teamSlug and nodeId in the URL. Response should include type, status, description, icon, address, connectedAt date, version, updating, id, name, hasToken, and install details.
GET /api/{teamSlug}/nodes/{nodeId}/script HTTP/1.1
Host:
Accept: */*
text
Request must include teamSlug and nodeId in the URL, and type and scriptType in the body.
POST /api/{teamSlug}/nodes/{nodeId}/script HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 93
{
"type": "docker",
"scriptType": "shell",
"rootPath": "text",
"dagentTraefik": {
"acmeEmail": "text"
}
}
{
"command": "text",
"script": "text",
"expireAt": "2025-07-05T16:10:44.530Z"
}
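Putting the node endpoints together, registering a Docker node could look roughly like the sketch below: create the node, request an install script for it, then run the returned command on the target machine before it expires. The base URL and Bearer-token auth are placeholders:
async function registerDockerNode(teamSlug: string, name: string): Promise<void> {
  const base = 'http://localhost:8000' // placeholder base URL
  const headers = { Authorization: `Bearer ${process.env.DYO_TOKEN}`, 'Content-Type': 'application/json' }

  // 1. Create the node entry.
  const node = await fetch(`${base}/api/${teamSlug}/nodes`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ name }),
  }).then(r => r.json()) // { id, name, type, status, ... }

  // 2. Generate a one-time install script for a Docker host.
  const install = await fetch(`${base}/api/${teamSlug}/nodes/${node.id}/script`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ type: 'docker', scriptType: 'shell' }),
  }).then(r => r.json()) // { command, script, expireAt }

  // 3. Run install.command on the target host to connect the agent.
  console.log(`Run before ${install.expireAt}:`, install.command)
}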
Request must include teamSlug and nodeId in the URL.
DELETE /api/{teamSlug}/nodes/{nodeId}/script HTTP/1.1
Host:
Accept: */*
No content
Request must include teamSlug and nodeId in the URL.
DELETE /api/{teamSlug}/nodes/{nodeId}/token HTTP/1.1
Host:
Accept: */*
No content
Request must include teamSlug and nodeId in the URL.
POST /api/{teamSlug}/nodes/{nodeId}/update HTTP/1.1
Host:
Accept: */*
No content
Request must include teamSlug and nodeId in the URL, and skip, take, and the from and to dates as query parameters. Response should include an array of items with createdAt date, event, and data.
GET /api/{teamSlug}/nodes/{nodeId}/audit?skip=1&take=1&from=2025-07-05T16%3A10%3A44.530Z&to=2025-07-05T16%3A10%3A44.530Z HTTP/1.1
Host:
Accept: */*
{
"items": [
{
"createdAt": "2025-07-05T16:10:44.530Z",
"event": "text",
"data": {}
}
],
"total": 1
}
Request must include nodeId, prefix, and name.
POST /api/{teamSlug}/nodes/{nodeId}/{prefix}/containers/{name}/start HTTP/1.1
Host:
Accept: */*
No content
Request must include nodeId, prefix, and name.
POST /api/{teamSlug}/nodes/{nodeId}/{prefix}/containers/{name}/stop HTTP/1.1
Host:
Accept: */*
No content
Request must include nodeId, prefix, and name.
POST /api/{teamSlug}/nodes/{nodeId}/{prefix}/containers/{name}/restart HTTP/1.1
Host:
Accept: */*
No content
Request must include nodeId and prefix.
DELETE /api/{teamSlug}/nodes/{nodeId}/{prefix}/containers HTTP/1.1
Host:
Accept: */*
No content
Request must include nodeId, prefix, and name.
DELETE /api/{teamSlug}/nodes/{nodeId}/{prefix}/containers/{name} HTTP/1.1
Host:
Accept: */*
No content
Request must include nodeId in the URL and prefix as a query parameter. Response should include the containers' id, command, createdAt, state, status, imageName, imageTag and ports.
GET /api/{teamSlug}/nodes/{nodeId}/containers?prefix=text HTTP/1.1
Host:
Accept: */*
[
{
"state": "running",
"id": {
"prefix": "text",
"name": "text"
},
"command": "text",
"createdAt": "2025-07-05T16:10:44.530Z",
"reason": "text",
"imageName": "text",
"imageTag": "text",
"ports": [
{
"internal": 1,
"external": 1
}
]
}
]
Request must include nodeId, and the name of the container.
POST /api/{teamSlug}/nodes/{nodeId}/containers/{name}/start HTTP/1.1
Host:
Accept: */*
No content
Request must include nodeId, and the name of the container.
POST /api/{teamSlug}/nodes/{nodeId}/containers/{name}/stop HTTP/1.1
Host:
Accept: */*
No content
Request must include nodeId, and the name of the container.
POST /api/{teamSlug}/nodes/{nodeId}/containers/{name}/restart HTTP/1.1
Host:
Accept: */*
No content
Request must include nodeId, and the name of the container.
DELETE /api/{teamSlug}/nodes/{nodeId}/containers/{name} HTTP/1.1
Host:
Accept: */*
No content
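A sketch of listing the containers under a prefix and restarting the first one by name, combining the endpoints above; the base URL and Bearer-token auth are placeholders:
async function restartFirstContainer(teamSlug: string, nodeId: string, prefix: string): Promise<void> {
  const base = 'http://localhost:8000' // placeholder base URL
  const headers = { Authorization: `Bearer ${process.env.DYO_TOKEN}` }

  const containers = await fetch(
    `${base}/api/${teamSlug}/nodes/${nodeId}/containers?prefix=${encodeURIComponent(prefix)}`,
    { headers },
  ).then(r => r.json()) // [{ id: { prefix, name }, state, imageName, imageTag, ports, ... }]

  const first = containers[0]
  if (!first) return
  await fetch(`${base}/api/${teamSlug}/nodes/${nodeId}/${first.id.prefix}/containers/${first.id.name}/restart`, {
    method: 'POST',
    headers,
  })
}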
Request must include skip, take, and the from and to dates as query parameters. Response should include an array of items with actorType, context, method, createdAt date, the user's id and email, name, event, and data.
GET /api/{teamSlug}/audit-log?skip=1&take=1&from=2025-07-05T16%3A10%3A44.530Z&to=2025-07-05T16%3A10%3A44.530Z HTTP/1.1
Host:
Accept: */*
{
"items": [
{
"actorType": "get",
"context": "http",
"method": "get",
"createdAt": "2025-07-05T16:10:44.530Z",
"user": {
"id": "text",
"email": "text"
},
"name": "text",
"event": "text",
"data": {}
}
],
"total": 1
}
Request must include teamSlug in the URL. Response should include type, id, name, url, active, and creatorName.
GET /api/{teamSlug}/notifications HTTP/1.1
Host:
Accept: */*
[
{
"type": "discord",
"id": "text",
"name": "text",
"url": "text",
"active": true,
"creatorName": "text"
}
]
Request must include teamSlug in the URL, and type, enabledEvents, name, url, and active in the body. Response should list type, enabledEvents, id, name, url, active, and creatorName.
POST /api/{teamSlug}/notifications HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 98
{
"type": "discord",
"enabledEvents": [
"deployment-created"
],
"name": "text",
"url": "text",
"active": true
}
{
"type": "discord",
"enabledEvents": [
"deployment-created"
],
"id": "text",
"name": "text",
"url": "text",
"active": true,
"creatorName": "text"
}
Request must include teamSlug and notificationId parameters in the URL. Response should include type, enabledEvents, id, name, url, active, and creatorName.
GET /api/{teamSlug}/notifications/{notificationId} HTTP/1.1
Host:
Accept: */*
{
"type": "discord",
"enabledEvents": [
"deployment-created"
],
"id": "text",
"name": "text",
"url": "text",
"active": true,
"creatorName": "text"
}
Request must include teamSlug and notificationId in the URL, and type, enabledEvents, name, url, and active in the body. Response should include type, enabledEvents, id, name, url, active, and creatorName.
PUT /api/{teamSlug}/notifications/{notificationId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 98
{
"type": "discord",
"enabledEvents": [
"deployment-created"
],
"name": "text",
"url": "text",
"active": true
}
{
"type": "discord",
"enabledEvents": [
"deployment-created"
],
"id": "text",
"name": "text",
"url": "text",
"active": true,
"creatorName": "text"
}
Request must include teamSlug and notificationId in the URL.
DELETE /api/{teamSlug}/notifications/{notificationId} HTTP/1.1
Host:
Accept: */*
No content
Request must include teamSlug and notificationId in the URL.
POST /api/{teamSlug}/notifications/{notificationId}/test HTTP/1.1
Host:
Accept: */*
No content
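A sketch of registering a Discord webhook notification and firing a test message with the endpoints above; the webhook URL, base URL and Bearer-token auth are placeholders:
async function addDiscordNotification(teamSlug: string, webhookUrl: string): Promise<void> {
  const base = 'http://localhost:8000' // placeholder base URL
  const headers = { Authorization: `Bearer ${process.env.DYO_TOKEN}`, 'Content-Type': 'application/json' }

  const created = await fetch(`${base}/api/${teamSlug}/notifications`, {
    method: 'POST',
    headers,
    body: JSON.stringify({
      type: 'discord',
      name: 'deployments',
      url: webhookUrl,
      active: true,
      enabledEvents: ['deployment-created'],
    }),
  }).then(r => r.json()) // { id, type, name, url, active, enabledEvents, creatorName }

  // Send a test message to verify the webhook works.
  await fetch(`${base}/api/${teamSlug}/notifications/${created.id}/test`, { method: 'POST', headers })
}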
Response should include the id, name, description and technologies of templates.
GET /api/templates HTTP/1.1
Host:
Accept: */*
[
{
"id": "text",
"name": "text",
"description": "text",
"technologies": [
"text"
]
}
]
Request must include type, id, teamSlug, and name. Response should include id, name, description, type, and audit log details.
POST /api/templates HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 87
{
"type": "versionless",
"id": "text",
"teamSlug": "text",
"name": "text",
"description": "text"
}
{
"type": "versionless",
"description": "text",
"audit": {
"createdAt": "2025-07-05T16:10:44.530Z",
"createdBy": "text",
"updatedAt": "2025-07-05T16:10:44.530Z",
"updatedBy": "text"
},
"id": "text",
"name": "text"
}
Request must include templateId.
GET /api/templates/{templateId}/image HTTP/1.1
Host:
Accept: */*
No content
teamSlug is required in the URL. Response should include users, the number of auditLogEntries, projects, versions, deployments, failedDeployments, details of nodes, latestDeployments and auditLog entries.
GET /api/{teamSlug}/dashboard HTTP/1.1
Host:
Accept: */*
{
"users": 1,
"auditLog": 1,
"projects": 1,
"versions": 1,
"deployments": 1,
"failedDeployments": 1,
"onboarding": {
"signUp": {
"done": true,
"resourceId": "text"
},
"createTeam": {
"done": true,
"resourceId": "text"
},
"createNode": {
"done": true,
"resourceId": "text"
},
"createProject": {
"done": true,
"resourceId": "text"
},
"createVersion": {
"done": true,
"resourceId": "text"
},
"addImages": {
"done": true,
"resourceId": "text"
},
"addDeployment": {
"done": true,
"resourceId": "text"
},
"deploy": {
"done": true,
"resourceId": "text"
}
}
}
Response should include description, icon, url, id, and name. teamSlug is required in the URL.
GET /api/{teamSlug}/storages HTTP/1.1
Host:
Accept: */*
[
{
"description": "text",
"icon": "text",
"url": "text",
"id": "text",
"name": "text"
}
]
Creates a new storage. Request must include teamSlug in the URL; the body is required to include name and url, and may include description, icon, accessKey, and secretKey. Response should include description, icon, url, id, name, accessKey, secretKey, and inUse.
POST /api/{teamSlug}/storages HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 101
{
"name": "text",
"description": "text",
"icon": "text",
"url": "text",
"accessKey": "text",
"secretKey": "text"
}
{
"description": "text",
"icon": "text",
"url": "text",
"id": "text",
"name": "text",
"accessKey": "text",
"secretKey": "text",
"inUse": true
}
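A sketch of registering a storage with the fields listed above; the storage URL, credentials, base URL and Bearer-token auth are all placeholders:
async function createStorage(teamSlug: string): Promise<unknown> {
  const res = await fetch(`http://localhost:8000/api/${teamSlug}/storages`, { // placeholder base URL
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.DYO_TOKEN}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: 'backups',
      url: 'https://storage.example.com', // placeholder storage endpoint
      accessKey: '<access_key>',
      secretKey: '<secret_key>',
    }),
  })
  if (!res.ok) throw new Error(`Failed to create storage: ${res.status}`)
  return res.json() // { id, name, url, description, icon, accessKey, secretKey, inUse }
}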
Response should include id and name. teamSlug is required in the URL.
GET /api/{teamSlug}/storages/options HTTP/1.1
Host:
Accept: */*
[
{
"id": "text",
"name": "text"
}
]
Get the details of a storage. Request must include teamSlug and storageId in the URL. Response should include description, icon, url, id, name, accessKey, secretKey, and inUse.
GET /api/{teamSlug}/storages/{storageId} HTTP/1.1
Host:
Accept: */*
{
"description": "text",
"icon": "text",
"url": "text",
"id": "text",
"name": "text",
"accessKey": "text",
"secretKey": "text",
"inUse": true
}
Updates a storage. Request must include teamSlug and storageId in the URL. name and url must be included in the body; the body may also include description, icon, accessKey, and secretKey.
PUT /api/{teamSlug}/storages/{storageId} HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 101
{
"name": "text",
"description": "text",
"icon": "text",
"url": "text",
"accessKey": "text",
"secretKey": "text"
}
No content
Deletes a storage. Request must include teamSlug and storageId in the URL.
DELETE /api/{teamSlug}/storages/{storageId} HTTP/1.1
Host:
Accept: */*
No content
NODE_ENV=development
# # Development configurations
# Kratos public API
KRATOS_URL=http://localhost:8000/kratos
# Kratos admin API
# This should never be exposed
KRATOS_ADMIN_URL=http://localhost:4434
DATABASE_URL="postgresql://username:password@localhost:5432/crux?schema=public"
CRUX_UI_URL=http://localhost:8000
CRUX_POSTGRES_PASSWORD="Random_Generated_String"
# # Port settings
# Agent gRPC API port
GRPC_AGENT_PORT=5000
# RestAPI port
HTTP_API_PORT=1848
# Prometheus metrics port
METRICS_API_PORT=1956
# Podman uses a different alias: host.containers.local:5000
CRUX_AGENT_ADDRESS=localhost:5000
# Signing secret for the generated JWTs
JWT_SECRET=jwt-secret-token
# Secret key for encrypting stored credentials
# Can be generated using the CLI
# Example: docker run --rm ghcr.io/dyrector-io/dyrectorio/cli/dyo:latest generate crux encryption-key
ENCRYPTION_SECRET_KEY=
# The old encryption key used to decrypt existing secrets while rotating keys
# ENCRYPTION_DEPRECATED_KEY=
# The Docker image tag in the node install script
# Uncomment to use a different agent version
# Defaults to the version of dyrector.io
# CRUX_AGENT_IMAGE=latest
# Uncomment to prevent the install script from
# overwriting your locally built agent image
# AGENT_INSTALL_SCRIPT_DISABLE_PULL=true
# Possible values: trace, debug, info, warn, error, and fatal
# The settings above come in a hierarchic order
# Example: error contains fatal
LOG_LEVEL=debug
# # Email service config
# SMTP URL for the mailslurper
SMTP_URI=smtps://test:test@localhost:1025/?skip_ssl_verify=true&legacy_ssl=true
# E-mail address for dyrector.io invitation links, password resets and others
FROM_EMAIL=<sender_email_address>
# E-mail sender name for dyrector.io invitation links, password resets and others
FROM_NAME=dyrector.io
# Google ReCAPTCHA config
DISABLE_RECAPTCHA=true
# Required only when ReCAPTCHA is enabled
RECAPTCHA_SECRET_KEY=<recaptcha_secret_key>
# Registry label fetching config
DISABLE_REGISTRY_LABEL_FETCHING=false
# Determines the maximum number of log lines returned from a container
MAX_CONTAINER_LOG_TAKE=1000
# Determines how much time an agent callback has to execute
AGENT_CALLBACK_TIMEOUT=5000
# Maximum accepted message size sent by the agent in bytes
# defaults to 4 Megabytes
# MAX_GRPC_RECEIVE_MESSAGE_LENGTH=4194304
# GRPC Timeout values and their respective defaults
# GRPC_KEEPALIVE_TIMEOUT_MS=5000
# GRPC_KEEPALIVE_TIME_MS=30000
# HTTP2_MINPINGINTERVAL_MS=30000
# HTTP2_MINTIMEBETWEENPINGS_MS=10000
# For overriding the node DNS result order
# regardless of the NODE_ENV value
# It may be necessary for running the e2e tests,
# because node resolves localhost to IPv6 by default
# DNS_DEFAULT_RESULT_ORDER=ipv4first
# To turn off quality assurance telemetry
# defaults to false
# more info: https://docs.dyrector.io/learn-more/quality-assurance-qa
# QA_OPT_OUT=true
# For providing a group identifier codename for the collected usage data
# QA_GROUP_NAME=
# Generic config options
# filled with defaults where it's applicable
DEFAULT_LIMITS_CPU=100m
DEFAULT_LIMITS_MEMORY=128Mi
DEFAULT_REQUESTS_CPU=50m
DEFAULT_REQUESTS_MEMORY=64Mi
DEFAULT_VOLUME_SIZE=1G
# GRPC_TOKEN=jwt
IMPORT_CONTAINER_IMAGE=rclone/rclone:1.57.0
INGRESS_ROOT_DOMAIN=
READ_HEADER_TIMEOUT=15s
DEBUG=true
DEBUG_UPDATE_ALWAYS=false
DEBUG_UPDATE_USE_CONTAINERS=true
# DAgent specific options
AGENT_CONTAINER_NAME=dagent
DAGENT_IMAGE=ghcr.io/dyrector-io/dyrectorio/dagent
DAGENT_NAME=dagent-go
DAGENT_TAG=latest
# This should match the mount path that is
# the root of configurations and containers
DATA_MOUNT_PATH=/srv/dagent
DEFAULT_TAG=latest
DEFAULT_TIMEOUT=5s
GRPC_KEEPALIVE=60s
# Path of 'docker.sock' or other local/remote
# address where we can communicate with docker
HOST_DOCKER_SOCK_PATH=/var/run/docker.sock
# Containers mount path default
INTERNAL_MOUNT_PATH=/srv/dagent
# Loglines to skip if not defined on the request
LOG_DEFAULT_SKIP=0
# Loglines to take
LOG_DEFAULT_TAKE=100
MIN_DOCKER_VERSION=20.10
# E-mail address to use for dynamic certificate requests
TRAEFIK_ACME_MAIL=
TRAEFIK_ENABLED=false
# Loglevel for Traefik
# Set to "DEBUG" to access Traefik dashboard
TRAEFIK_LOG_LEVEL=
# Whether to enable Traefik TLS or not
TRAEFIK_TLS=false
DEFAULT_REGISTRY=index.docker.io
# Token used by the webhook to trigger the update
WEBHOOK_TOKEN=
# Generic config options
# filled with defaults where it's applicable
DEFAULT_LIMITS_CPU=100m
DEFAULT_LIMITS_MEMORY=128Mi
DEFAULT_REQUESTS_CPU=50m
DEFAULT_REQUESTS_MEMORY=64Mi
DEFAULT_VOLUME_SIZE=1G
# GRPC_TOKEN=
IMPORT_CONTAINER_IMAGE=rclone/rclone:1.57.0
INGRESS_ROOT_DOMAIN=
READ_HEADER_TIMEOUT=15s
DEBUG=true
DEBUG_UPDATE_ALWAYS=false
DEBUG_UPDATE_USE_CONTAINERS=true
DEFAULT_REGISTRY=index.docker.io
# Crane specific options
# Put 'true' to use in-cluster auth
CRANE_IN_CLUSTER=false
# The maximum duration for a Kubernetes API request to complete
DEFAULT_KUBE_TIMEOUT=2m
# Field manager name
FIELD_MANAGER_NAME=crane-dyrector-io
# Use 'Force: true' while deploying
FORCE_ON_CONFLICTS=true
# The key/label name for audit purposes
KEY_ISSUER=co.dyrector.io/issuer
# The "kubectl" configuration location
KUBECONFIG=
# Timeouts used in tests, no effect on deployment
TEST_TIMEOUT=15s
# For injecting SecretPrivateKey
SECRET_NAME=dyrectorio-secret
SECRET_NAMESPACE=dyrectorio
CRUX_UI_URL=http://localhost:8000
KRATOS_URL=http://localhost:8000/kratos
KRATOS_ADMIN_URL=http://localhost:4434
# Sets the severity level of logging
# Possible values: trace, debug, info, warn, error, and fatal
# The settings come in a hierarchic order
# Example: error contains fatal
LOG_LEVEL=trace
# # Google ReCAPTCHA config
DISABLE_RECAPTCHA=true
# Required only when ReCAPTCHA is enabled
RECAPTCHA_SITE_KEY=<public_recaptcha_site_key>
RECAPTCHA_SECRET_KEY=<recaptcha_secret_key>
# # Playwright test config (for e2e tests)
E2E_BASE_URL=http://localhost:8000
# Docker HUB Proxy (optional)
# HUB_PROXY_URL=http://<proxy_url>
# HUB_PROXY_TOKEN=<proxy_token>
# For overriding the node dns result order regardless of the NODE_ENV value
# It may be necessary for running the e2e tests,
# because node resolves localhost to IPv6 by default
# DNS_DEFAULT_RESULT_ORDER=ipv4first