Registries are third-party sources where the images of your versions are stored. Learn more about registries here.
There are two kinds of projects in dyrector.io: versionless and versioned. Versionless projects make up one deployable unit without versioning, while versioned projects come with multiple rolling or incremental versions. More details here.
Versions belong to versioned projects. Versionless projects act similarly to a rolling version of a versioned project.
The purpose of versions is to separate different variations of your project. They can be either rolling or incremental. One versionless project can have multiple versions of both types. More details about rolling and incremental versions here.
Images make up a versioned project's version, or a versionless project.
Teams are the shared entity of multiple users. The purpose of teams is to separate users, nodes and projects based on their needs within an organization. Team owners can assign roles. More details about teams here.
Users/Me cover endpoints related to your user profile.
Deployments are the process of installing your versions or versionless projects on the node of your choice. More details about deployments here.
Tokens are the access tokens that grant you access to a user profile and the teams the profile is a member of.
Nodes are the deployment targets. Nodes are registered by installing at least one of the agents – crane for Kubernetes, dagent for Docker. These agents connect the platform to your node. One team can have as many nodes as they like.
Node installation takes place with Shell or PowerShell scripts, which can be created or revoked. More details here.
Audit log is a log of team activity generated by the platform.
Health refers to the status of the different services that make up the platform. It can be checked to see if the platform works properly.
Notifications are chat notifications in Slack, Discord, and Teams. They send an automated message about deployments, new versions, new nodes, and new users. More details here.
Templates are preset applications that can be turned into a project right away. They can be deployed with minimal configuration. More details about templates here.
Dashboard summarizes the latest activities of a team.
Storages are S3-compatible storages. They can be used for file injection. More details here.
This is an extremely compressed guide for new users who would like to give cloud-hosted dyrector.io a look as quickly as possible.
Having access to a target environment (Docker or Kubernetes) is a requirement. More details here.
Create a team.
On the nodes page add a new Docker or Kubernetes orchestrated node.
Add a registry to the platform. Docker Hub is available by default. Bypass this step and step 5 by saving one of the Templates as a project.
Add a deployment to your project.
Configure the images if needed.
Deploy. 🎬
To better understand how you’ll be able to manage your applications on the platform, there are certain components – entities and concepts – within the platform that need to be cleared.
A group of users working on the same project. Unless you're invited to a team, you must define one on your first login. Later you can invite others to your team; they'll have access to every piece of data and every component that belongs to the team, including configurations and secrets. To limit access to one team's data, create a new team that'll have its own components.
Teams can have multiple Nodes, Projects and Registries.
Nodes are the deployment target environments where one of the agents is installed. A node can be any cloud or on-premises infrastructure. Without registering a node, your team can't use the platform. We suggest registering a node as the first step after creating your team.
dyrector.io doesn't provide infrastructure. You can use the platform with your already existing infra.
Node setups require admin or root privilege. Without that, it's not possible to install the platform's agent on your node.
If you're curious about the install scripts of the agent, you can check them out at the link below:
Container registries where the images that you plan to deploy are stored. You can use any Docker Registry API V2 compatible registry with the platform, including Docker Hub, GitHub, GitLab, Google, Azure registries. Unchecked sources are supported, too, as long as they're V2 compatible. Docker Hub Library is available to every user by default.
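As a quick sanity check before adding a registry, you can verify it speaks the Docker Registry HTTP API V2: compatible registries answer on the /v2/ endpoint with HTTP 200 (public) or 401 (credentials required). A minimal sketch, with a hypothetical registry host:

```shell
# Hypothetical registry host -- replace with your own.
REGISTRY="registry.example.com"

# Docker Registry HTTP API V2 compatible registries answer on the /v2/
# endpoint: HTTP 200 for public access, 401 when credentials are required.
V2_ENDPOINT="https://${REGISTRY}/v2/"
echo "$V2_ENDPOINT"

# To run the actual check (needs network access):
#   curl -s -o /dev/null -w '%{http_code}' "$V2_ENDPOINT"
```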
Projects are the fundamental building blocks of dyrector.io, as these are the deployable units that contain the images with the corresponding configuration. These are the stacks you’ll manage in dyrector.io. There are two types of Projects, as seen below.
| Version Type | Versionless | Rolling | Incremental |
| --- | --- | --- | --- |
| Rollbacks | ❌ | ❌ | ✅ |
| History | ❌ | ❌ Previous version is overwritten | ✅ |
| Image and configuration inheritance | ❌ | ❌ | ✅ |
| Ideal for | Testing, Single-container stacks | Nightly versions | Production |
Versionless Project: these projects have only one hidden version and cannot be rolled back. These are mostly useful for testing purposes.
Versioned Project: versioned projects can have multiple versions. The different versions can have different configuration and images. Semantic versioning is suggested.
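Since semantic versioning is suggested, note that semver tags order numerically per component rather than alphabetically; GNU `sort -V` illustrates the expected ordering (assumes GNU coreutils):

```shell
# Semantic versions compare component by component, so 1.2.10 comes
# after 1.2.2 even though it sorts earlier alphabetically.
printf '1.2.10\n1.2.2\n1.10.0\n' | sort -V
# -> 1.2.2, 1.2.10, 1.10.0
```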
Deployment is the process of delivering the images that make up a project to the environment you added as a node by installing an agent on it. You can assign environment and configuration variables to the deployments, and also edit deployments depending on the type of project you want to set up on your node.
Each deployed project comes with a prefix. Prefixes are alphanumeric strings included in container names to distinguish the deployment of a version on a node. Prefixes are scoped to a node.
The purpose of this is to avoid duplications of a stack on an environment.
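As an illustration, a prefix works roughly like a namespace prepended to the container names of a stack on the node. The exact naming scheme is the platform's concern; this sketch only shows the idea, with hypothetical names:

```shell
# Hypothetical deployment prefix and container names.
PREFIX="staging"
CONTAINERS="backend frontend"

# The prefix distinguishes this stack's containers from another
# deployment of the same project on the same node.
for c in $CONTAINERS; do
  echo "${PREFIX}-${c}"
done
# -> staging-backend, staging-frontend
```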
Rolling version
Rolling versions have one deployment per version on each node, because a new deployment overwrites the existing stack on the node. These versions can still have multiple deployment prefixes, since you can deploy them to multiple nodes. Deployments of rolling versions can't be modified or deleted only while they're In progress; once the deployment of a rolling version is complete, you can adjust or delete it.
Incremental version
You can deploy multiple incremental versions of the same project to a node, therefore different incremental versions can have the same deployment prefix. The first deployment of an incremental version is mutable as long as it's not In progress. Incremental versions can have multiple deployments with the same prefix, but only one of them can have the Preparing status. Only In progress deployments can't be deleted. After deploying an incremental version, the deployment will get one of the two statuses below:
Successful: successful deployments remain immutable. After a successful deployment, you can roll back the previous version with the corresponding database. New successful deployments turn previous ones that belong to older versions Obsolete. When a previous version gets rolled back, all the deployments coming after it turn Downgraded.
Failed: if a deployment comes back failed, it remains mutable.
Versionless projects come without a version. The purpose of versionless projects is to reduce infrastructure maintenance overhead.
Use cases:
Messaging queues: RabbitMQ, MQTT, etc.
Proxies: Traefik, NGINX, etc.
Databases: Postgres, MySQL, etc.
Protected deployments
There are protected deployments that prevent you from overwriting your deployments from another project. This is a helpful measure when you manage dozens or hundreds of environments, where a misplaced click could cause an outage for your users.
Rolling versions: You can't deploy it if a protected deployment exists with the same prefix on the node.
Incremental versions: You can't deploy it if a protected deployment exists of a different version with the same prefix on the node.
Audit logs collect team activity. They list executed actions, the users who initiated them, and when the actions happened.
Logs are assigned to teams.
Your user profile. One profile can belong to multiple teams.
The platform abstracts away interactions with containers. You can deploy images, start, restart, and delete containers with the platform. Logs and container inspection allows you to dig deeper into a container's operation when needed.
If you're passionate about self-hosting the tools you use every day, you can add all of your environments to the platform as nodes, where you can manage all the containers you run. Instead of setting up dashboards and adding each service individually, you can interact with all of your ecosystem through one GUI.
Pro tip: you can self-manage dyrector.io, too.
The key purpose of multi-instance deployments is to avoid repetitive tasks when the same stack needs to be deployed to dozens or hundreds of deployment targets. After configuring the stack once, you're able to select all the nodes where you'd like to set it up.
Below you can see a flowchart that illustrates how you can deploy the same stack to multiple environments at the same time.
Another scenario is when a 3rd-party redistributes the business application your organization develops. In the flowchart below you can see how this process differs from direct distribution as described above.
In progress: Bundled configurations enable your team to assign templatized configurations through the whole process.
QA, PMs and salespeople can spark up your stack instantly on their own by deploying the stack to their local machine or a demo environment as a project.
Installing Next.js apps with NGINX to a VPS or a Kubernetes cluster
Installing single images (like RabbitMQ or a database)
Checking server/cluster status
In short: developers and DevOps engineers. But there's always a bit more nuanced answer.
If you enjoy developing software more than configuring the containerized app you just created, dyrector.io is for you.
The platform helps you to spark up your containerized stack on any infra hassle-free, so you can spend more time doing things you like.
Imagine the platform as a hub where all the components of your infrastructure can be accessed, and containers can be started, shut down or restarted.
This hub not only grants you access to these resources, but also enables your teammates to interact with your applications in a self-service manner. They'll only need your help with deployments and troubleshooting when necessary.
Tinkering with stuff is your passion, and we're here to help you with that. If a project you'd like to use has an OCI compatible image, you can set it up on your local machine, on-premises server or cloud infrastructure.
Tip for self-hosting enthusiasts: you can self-manage dyrector.io for free, unlimited, forever.
dyrector.io is the right platform for startups eyeing containerization. The platform's release management capabilities help you spend your funds on meaningful stuff instead of wasting valuable engineering hours on repeated, mundane tasks.
As an organization, you might already have invested in some type of infrastructure. Moving your services off that infrastructure is painful and resource-heavy. With the platform, it's completely unnecessary, because the platform can be used with your infrastructure right away. And in case you decide to leave the platform, you can do so by exporting YAML files, so you avoid losing your data.
dyrector.io consists of 2 major components:
an agent that's installed on the environment where you want to deploy images – crane is the agent that communicates with the Kubernetes API and dagent is the agent for communication with the Docker API, both written in Go –
and a platform (UI developed in React.js and Next.js; backend developed in Node.js and Nest.js). Communication between the agents and the platform takes place over gRPC with TLS encryption. The data is managed in a PostgreSQL database mapped by the Prisma ORM.
Alpha is suggested for non-production purposes. If you want to use it for production, reach out to us at hello@dyrector.io.
Self-hosted dyrector.io is free and will always be, without feature and usage restrictions.
It's an open-source continuous delivery platform offering self-service deployment and configuration management capabilities via a GUI and API to any cloud or on-premises infrastructure.
You can find the project's repository on GitHub.
You already have a Kubernetes cluster and want to manage it with ease.
Multiple users on your team need to have access to your containers for management.
You'd like to configure versions once, without repetition.
Your team needs self-service deployments for QA or sales purposes.
A delivery platform that helps you by substituting Docker and Kubernetes command line interactions with abstractions. You're able to configure any OCI compatible containers with a configuration screen, and in a JSON-editor, as well.
You can use the platform by installing its agents on your infrastructure. The agent will communicate with the platform to conduct interactions with containers running on your infra.
The platform is ready to interact with your already existing infrastructure and clusters right away. The platform interacts with any existing cloud & on-premises infrastructure, as we don't offer infra. You can deploy images from any registry you have access to.
The platform can be used without moving your services to a brand new infrastructure. To provide quick setup, we don't offer any infrastructure to our users.
Chat notifications on Discord, Slack, and Teams let you and your team know about newly made deployments and versions to increase collaboration.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select GitHub Registry type.
Step 4: Select if you'd like to add an organization or a user.
Step 5: In the corresponding fields, enter:
Your GitHub user name,
And your organization’s or your user's GitHub name.
Only classic tokens are supported. Fine-grained tokens aren't supported yet.
Step 6: Click ‘Save’ button on the top right.
Docker Hub is a public image library. You can add registries from Docker Hub with the following steps.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select Docker Hub type.
Step 4: Enter the registry’s name or your username in Docker Hub in the ‘Organization name or username’ field.
Step 5: Click ‘Save’ button on the top right.
Nodes are the environments that'll be the target of your deployments.
If you're curious about the install scripts of the agent, you can check them out at the links below:
Step 1: Open Nodes on the left and click ‘Add’ on top right.
Step 2: Enter your node’s name and select its icon.
Tip: You can write a description so others on your team can understand what’s the purpose of this node.
Step 3: Click ‘Save’ and select the type of technology your node uses. You can select
Docker Host,
and Kubernetes Cluster.
Docker Host requirements are the following:
a Linux host/VPS,
Docker or Podman installed on host,
ability to execute the script on host.
Kubernetes Cluster requirements are the following:
a Kubernetes cluster,
kubectl authenticated, active context selected,
ability to run these commands.
Users are able to opt in to install Traefik, as well. In that case they need to add an ACME email address where they'll be notified when their certificate expires.
Step 4: Depending on your node's OS, select whether you'd like to generate a shell or a PowerShell script. Shell scripts are supported on Linux nodes, while PowerShell scripts are designed to be used with Windows nodes.
Step 5: After picking the technology and the script's type, click the ‘Generate script’ button to generate a one-liner script.
Step 6: Run the one-liner in sh or bash.
The one-liner will generate a script that’ll set the platform’s agent up on your node.
Information and status of your node will show on the node's page, so you can see if the setup is successful right away.
Now you're ready to set up your project, and you're one step closer to deploying your application.
You can add registries available from both self-managed or SaaS GitLab.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select GitLab Registry type.
Step 4: Select if you'd like to add a group or a project.
Step 5: In the corresponding fields, enter:
Your GitLab username,
And your group’s or your project's GitLab name. You can find either under its name on its main page.
Step 6: Make sure the Self-managed GitLab toggle is off. If you select GitLab Registry type, SaaS should be set by default.
Step 7: Click ‘Save’ button on the top right.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select GitLab Registry type.
Step 4: In the corresponding fields, enter:
your GitLab username,
And your organization’s GitLab name.
Step 5: Turn on the Self-managed GitLab toggle.
Step 6: Enter the GitLab Registry’s URL and the GitLab API URL without the https:// prefix.
Step 7: Click ‘Save’ button on the top right.
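The URL fields in the steps above expect host names without the scheme; a shell parameter expansion shows the intended form (URLs are hypothetical):

```shell
# Hypothetical self-managed GitLab URLs -- replace with your own.
REGISTRY_URL="https://registry.gitlab.example.com"
API_URL="https://gitlab.example.com"

# Strip the scheme, since the form expects host names only.
echo "${REGISTRY_URL#https://}"   # registry.gitlab.example.com
echo "${API_URL#https://}"        # gitlab.example.com
```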
Registries are where the images you'd like to deploy are collected. While there are multiple registries explicitly supported, all OCI registries are supported. You can add registries from the following sources:
V2 Registries are Docker Registry HTTP API V2 compatible. Both private and public registries are supported.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select V2 Registry and switch the toggle under the URL field to ‘Private’.
Step 4: In the corresponding fields, enter:
URL of your registry without the /v2 suffix,
username, and
the token or password which you use to access it.
Step 5: Click ‘Save’ button on the top right.
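If the URL you copied ends with the /v2 suffix, drop it before saving; a small sketch of the expected value (hypothetical host):

```shell
# Hypothetical registry URL that mistakenly includes the /v2 suffix.
RAW_URL="registry.example.com/v2"

# The form expects the URL without the /v2 suffix.
CLEAN_URL="${RAW_URL%/v2}"
echo "$CLEAN_URL"   # registry.example.com
```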
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon below.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select V2 Registry type and switch the toggle under the URL field to ‘Public’.
Step 4: Enter the URL of your registry without the /v2 suffix.
Step 5: Click ‘Save’ button on the top right.
A personal access token generated in GitHub with the steps documented. Select the repo and read:packages scopes. The repo scope is required due to GitHub's limitations. You're still able to add public GitHub registries, too.
Node setups require admin or root privileges. Without them, it's not possible to install dyrector.io's agent on your node, in the case of both Docker and Kubernetes.
Traefik's Docker instance is only supported on Linux. Further details are available in the official Docker documentation.
Your password or access token generated in GitLab with the steps documented. Select the read_api and read_registry scopes.
Projects are the deployable units made up of images and their configuration you're going to manage through the platform.
| Version Type | Versionless | Rolling | Incremental |
| --- | --- | --- | --- |
| Rollbacks | ❌ | ❌ | ✅ |
| History | ❌ | ❌ Previous version overwritten | ✅ |
| Image and configuration inheritance | ❌ | ❌ | ✅ |
| Ideal for | Testing, Single-container stacks | Nightly versions | Production |
The different project types have different deployment capabilities. For more details about the differences, check out the Components section.
Versionless projects have one abstracted-away version and cannot be rolled back. These are mostly useful for testing purposes.
Versioned projects have two types of versions: rolling and incremental. You can define one version per versioned project as the default. That'll be the version future versions inherit images and configurations from, until you set another one as the default.
Rolling versions are similar to versionless projects, except they’re perfect for continuous delivery. They’re always mutable, but contrary to incremental versions they aren’t hierarchical and lack a version number.
Incremental versions are hierarchical. They can have a child version and once a deployment is successful, the deployed versions, the environment variables, and the deployed images can never be modified. This guarantees you’re able to roll back the deployed version and reinstate the last functional one if any error occurs to avoid downtime.
Versioned projects can have both rolling and incremental versions.
If you’ve created a versioned project, you’ll need to add a version to it. This version could be a rolling or an incremental version.
Step 1: Open Projects page on the left and select the project you want to set a version to.
Step 2: Click ‘Add version’ on the top right.
Step 3: Define a name for your project’s version in the name field.
Step 4: Enter a changelog if you’d like to.
Step 5: Select if you’d like to make this a rolling or an incremental version. Click here for more details on the difference between the two.
Step 6: Select if you’d like to make this the default version, from which all future versions will automatically inherit images and their configurations.
Step 7: Click ‘Save’ on top right.
Step 1: On the Projects tab, click ‘Add’ on top right.
Step 2: Enter the project’s name.
Tip: You can write a description so your teammates can understand what’s the purpose of this versioned project.
Step 3: Select the versioned type on the switch under the description.
Step 4: Click ‘Save’. You’ll be directed to the Projects tab. Select the project you just created.
Step 5: Click ‘Add Version’.
Step 6: Enter a name and a changelog for the new version. At this point, you can choose whether you’d like to create a rolling or an incremental version.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select Google Registry and switch the toggle under the URL field to ‘Private’.
Step 4: In the corresponding fields, enter the organization name. Upload the JSON key file you can generate as documented here. In the Google Cloud documentation you only need to follow instructions until the 2nd part.
Step 5: Click ‘Save’ button on the top right.
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon below.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select Google Registry type and switch the toggle under the URL field to ‘Public’.
Step 4: Enter the Organization name.
Step 5: Click ‘Save’ button on the top right.
Versionless projects have only one abstracted-away version and cannot be rolled back. These are mostly useful for testing purposes or single-container stacks.
Step 1: On the Project tab, click ‘Add’ on top right.
Step 2: Enter the project’s name.
Tip: You can write a description so your teammates can understand what’s the purpose of this project.
Step 3: Select the versionless type under the description.
Step 4: Click ‘Save’. You’ll be directed to the Project tab. Select the project you just created.
Step 5: Click ‘Add Image’.
Step 6: Select the Registry you want to filter images from.
Step 7: Type the image’s name to filter images. Select the image by clicking on the checkbox next to it.
Step 8: Click ‘Add’.
Step 9: Click on the ‘Tag’ icon in the actions column, to the left of the bin icon. This will allow you to select a version of the image you picked in the previous step.
You can define environment configurations to the selected image by clicking on the gear icon on the right. For further adjustments, click on the JSON tab where you can define other variables. Copy and paste it to another image when necessary. Learn more about Configuration management here.
Step 10: Click ‘Add Image’ to add another image. Repeat until you have all the desired images included in your project.
Rolling versions are similar to versionless projects in that they come with a version that can't be rolled back. They’re always mutable, but contrary to incremental versions, they aren’t hierarchical and lack a version number.
Step 1: After picking the Rolling tag, click ‘Save’. You’ll be directed to the Project tab. Select the project again.
Step 2: Click ‘Add version’.
Step 3: Enter the rolling version's name and specify a changelog.
Step 4: Click 'Save'. You'll be directed to the board of versions of your versioned project.
Step 5: Click 'Images' button in the card that belongs to the version you'd like to assemble.
Step 6: Click 'Add image'.
Step 7: Select the Registry you want to add images from.
Step 8: Type the image’s name to filter images. Select the image by clicking on the checkbox next to it.
Step 9: Click ‘Add’.
Step 10: Pick the ‘Tag’ icon next to the bin icon in the actions column to pick a version of the image you selected in the previous step.
Now you can define environment configurations to the selected image. For further adjustments, click on the JSON tab where you can define other variables. Copy and paste it to another image when necessary. Learn more about Configuration management here.
Step 11: Click ‘Add Image’ to add another image. Repeat until you have all the desired images included in your project.
Unchecked means image availability checking is disabled. Any Docker Registry HTTP API V2 compatible registry can be added this way, but images and their tags cannot be browsed when assembling a project. You can add an image to a project from an unchecked registry by entering the image's exact name and, when needed, the tag.
When you add images from an unchecked registry to a project, and the image's name doesn't match, the deployment will fail.
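Because images from unchecked registries can't be browsed, you reference them by their exact name. A sketch of the full reference form, with hypothetical registry and image names:

```shell
# Hypothetical components of an image reference.
REGISTRY="registry.example.com"
IMAGE="myorg/backend"
TAG="1.2.3"

# Exact reference needed when adding an image from an unchecked registry.
echo "${REGISTRY}/${IMAGE}:${TAG}"   # registry.example.com/myorg/backend:1.2.3
```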
Step 1: Open Registries on the left and click ‘Add’ on the top right.
Step 2: Enter your registry’s name and select an icon.
Tip: You can write a description, so others on your team can understand what’s the purpose of this registry.
Step 3: Select Unchecked Registry and switch the toggle under the URL field to ‘Private’.
Step 4: In the corresponding field, enter the registry's URL.
Step 5: Click ‘Save’ button on the top right.
Deployment is the key feature of the platform. It's the process of setting up the images on your node.
Deployment workflows are similar for each type and version of project, but there are differences between their capabilities. You can find out more about the differences in deployment capabilities between versionless and versioned projects here.
Step 1: Open the project or version you would like to deploy. To demonstrate the process, we used a versionless project.
Step 2: Click 'Add deployment'.
Step 3: Select the node in the 'Add deployment' block. After that, click 'Add' on the top right corner of the block.
Step 4: The images of the project will be listed. By clicking on the gear icon, you are able to define and adjust configuration variables. Learn more about Configuration management here.
Step 5: Click 'Deploy'. If everything goes right, deployment status should switch from 'Preparing' to 'In progress'. When deployment's complete the status should turn 'Successful'.
You can see the status change for each image in the second picture below.
Protected deployments
There are protected deployments that prevent you from overwriting your deployments from another project. This is a helpful measure when you manage dozens or hundreds of environments, where a misplaced click could cause an outage for your users.
Rolling versions: You can't deploy it if a protected deployment exists with the same prefix on the node.
Incremental versions: You can't deploy it if a protected deployment exists of a different version with the same prefix on the node.
Deleting a deployment will only remove the containers from the platform. Infrastructure related data, including volumes and networks, will remain stored on the node.
When you can't deploy a version or a project because node status turns outdated, you should navigate to your node's edit page and update the agent by clicking the Update button.
Deployment – Initiate deployments to a single or multiple deployment environments called nodes. A node can be in any cloud, VPN, or on-premises environment.
Release & Configuration management – One-time configuration for releases in a configuration screen or a JSON. Configure releases in real time with your teammates. In progress: Specify bundled configuration variables instead of going through them one-by-one, repeatedly.
Instant test environments – Spark up your stack in an instant on your local machine after adding it as a node without assistance.
Monitoring – Check container and deployment statuses via the platform and interfere when required.
Audit log – Audit log allows teams to trace back activity that might have caused an anomaly.
Chat notifications – Automated chat messages on Discord, Slack, and Teams when a teammate performs an action on the platform, so they don't have to go out of their way to let others know about completed tasks.
Changelogs – You're able to create changelogs with ease based on commit messages with the help of Conventional Commits, so your team understands the purpose of new versions. This simplifies communication between the departments who work on the product and outsiders, for example decision-makers. The generated changelogs can be sent out via e-mail or any chatbot integration.
Secret management – Store and manage sensitive data with our Vault integration powered by HashiCorp.
RBAC – Role based access control lets you distribute privileges based on their responsibilities to your teammates. This is important to prevent any wrongdoing in case a user profile gets corrupted.
ChatOps solutions – Interact with the platform and your stack via chat messages on the chat platform your team uses.
What you'll need:
An S3 API compatible storage available
Step 1: Navigate to Storage on the platform and click the Add button at the top right corner.
Step 2: Enter the following information to their respective fields:
Name
Your storage's URL
Access key
Secret key
Step 3: Click Save.
Step 1: Navigate to your projects and select the project that has the container you'd like to inject files to.
When the container is under a versioned project's version, there's an extra step you'll need to take: selecting the version you'd like to deploy.
Step 2: Click on the gear icon on the right side of the container.
Step 3: If it's turned off, turn on the Storage filter in the sub filters.
Step 4: Select the storage you'd like to inject files from, and enter the bucket path.
You can specify a folder within a bucket, too. (Example: "bucket/folder/sub-folder")
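The bucket path splits into a bucket name and an optional folder prefix; this sketch shows how the example path above decomposes:

```shell
# Example path from above: a bucket with a nested folder.
BUCKET_PATH="bucket/folder/sub-folder"

# The first segment is the bucket, the rest is the object key prefix.
BUCKET="${BUCKET_PATH%%/*}"
KEY_PREFIX="${BUCKET_PATH#*/}"
echo "$BUCKET"       # bucket
echo "$KEY_PREFIX"   # folder/sub-folder
```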
Incremental versions are hierarchical: they can have child versions, and once a deployment is successful, the deployed version, its environment variables, and its deployed images can never be modified. Because of this, you're able to roll back the deployed incremental version and reinstate the last functional version.
Step 1: After picking the incremental tag, click ‘Save’. You’ll be directed to the incremental version’s preview. Click ‘Add Version’.
Step 2: Enter a name and a change log for the new version.
Step 3: Click ‘Save’. You’ll be redirected to the project's version board.
Step 4: To add images, click ‘Images’ and ‘Add Image’ on the next view.
Step 5: Select the Registry you want to filter images from.
Step 6: Type the image’s name to filter images. Select the image by clicking on the checkbox next to it.
Step 7: Click ‘Add’.
Step 8: Pick the ‘Tag’ icon next to the bin icon under the actions column to pick a version of the image you selected in the previous step.
Step 9: Click ‘Add Image’ to add another image. Repeat until you have all the desired images included in your project.
Step 1: Open Projects and select the versioned project whose version you’d like to increase.
Step 2: Click ‘Increase’ button under the version of the project you’d like to add a new version to.
Step 3: Enter the version's name and add a changelog. Click ‘Save’.
Step 4: Click 'Add image’ to search for images you’d like to include in the new version. If you’d like to remove an image from the previous version, click on the red trash icon.
Step 5: Pick the ‘Tag’, which is a version of the image you selected in the previous step.
Chat notifications help you to get informed about your team's activity in an instant. As of now 3 platforms are supported:
Discord,
Slack,
and Microsoft Teams.
You can get notifications about 4 events:
new Node added,
new Version created,
successful and failed deployments,
new teammate invited.
Step 1: Click 'Notifications' on the left side.
Step 2: Select one of the 3 supported chat platforms.
Step 3: Enter the following data:
name of your notification,
webhook URL.
Step 4: Set the toggle to 'Active' or 'Inactive'. You can change this later to activate or deactivate the notification.
Step 5: Click 'Save' on the top right.
To test your notification, head to Notifications on the left, select the Notification you'd like to test and press the Test webhook message button.
You can inject files into a container by following the steps below.
Now you can define environment configurations for the selected image. For further adjustments, click on the JSON tab, where you can define other variables. Copy and paste it to another image when necessary. Learn more about Configuration management.
Before you start creating your Notifications, make sure you have the webhook ready. You can find how to create a webhook for Discord, Slack, and Microsoft Teams in their official documentation.
To set up Templates, users need a Node with either of the platform's agents. The deployment process is the same as for any other deployment. Some of the templates require a level of configuration before deployment. These requirements are documented, as well.
Reach out to us at hello@dyrector.io, or on our Discord server, if you'd like to request a new template.
Minecraft needs no introduction. You can set up a Minecraft server by following the steps of deployments as documented here. The image is available with the latest tag as a template.
Once the deployment is successful, the server is ready to use at http://localhost:25565/ by default.
Vaultwarden is an unofficial self-hosted Bitwarden implementation. The image is available with the latest tag as a template.
After the Node where you'd like to run Vaultwarden is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, self-managed Vaultwarden is ready to use at localhost:80 by default.
Strapi is a CMF that allows developers to build APIs in Node.js. It's mainly used to build content-driven applications or websites. You can quickly set it up on your infrastructure. Both images of Strapi are available with the latest tag.
After the Node where you'd like to run Strapi is registered, you can set up Strapi by following the steps of deployments as documented here.
Once the deployment is successful, Strapi is ready to use at localhost:1337 by default, as seen below.
Google Microservices Demo demonstrates how to build and deploy microservices-based applications using Google Cloud Platform (GCP). It allows users to showcase technologies like Kubernetes/GKE, Istio, Stackdriver, gRPC and OpenCensus. You can quickly set Google Microservices Demo up on your infrastructure.
All images are available with the 0.4.1 tag in the template.
After the Node where you'd like to run Google Microservices Demo is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, Google Microservices Demo is ready to use at localhost:65534 by default, as seen below.
WordPress is the most popular CMS; more than 43% of all websites are managed with it. You can quickly set it up wherever you'd like to manage your content with WordPress. The image is available with the latest tag.
After the Node where you'd like to run WordPress is registered, you can set up WordPress by following the steps of deployments as documented here.
Once the deployment is successful, WordPress is ready to use at localhost:4444 by default, as seen below.
MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. The image of MLflow is available with the 2.4.0 tag.
After the node where you'd like to run MLflow is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, MLflow is ready to use, as seen below.
Self-managed GitLab is an open-source version of GitLab to manage your code. Its key advantage is that users can configure GitLab to their needs. You can quickly set it up on your infrastructure. The image comes with the latest tag in the template.
After the Node where you'd like to run GitLab is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, self-managed GitLab is ready to use at localhost:21080 by default.
Gitea is a painless self-hosted Git service. It is similar to GitHub, Bitbucket, and GitLab. The image is available with the latest tag on dyrector.io.
After the Node where you'd like to run Gitea is registered, you can set it up by following the steps of deployments as documented here.
Once the deployment is successful, Gitea is ready to use at http://localhost:3000/ by default, as seen below.
LinkAce is a self-hosted archive to collect links of your favorite websites. You can quickly set it up on your infrastructure. The template consists of MariaDB (10.7) and the LinkAce image with the simple tag.
After the Node where you'd like to run LinkAce is registered, you can set it up by following the steps of deployments as documented here.
Requirements:
The APP_KEY environment variable must be 32 characters long.
After deployment, exec into the linkace-app container and add chmod 777 privileges to the /app/.env file.
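A quick way to produce a 32-character APP_KEY on the command line. This is a sketch: any 32-character string should do, and the container name in the comment is the one used by the template.

```shell
# 24 random bytes encode to exactly 32 base64 characters (no padding)
APP_KEY="$(openssl rand -base64 24)"
echo "${#APP_KEY}"   # 32

# Then, after deployment, on the node:
# docker exec linkace-app chmod 777 /app/.env
```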
Once the deployment is successful, LinkAce is ready to use at localhost:6780 by default, as seen below.
This function is still in the works; anomalies might occur.
After the CI/CD pipeline builds and pushes the image to a container registry, the pipeline triggers the deployment on the platform. The platform automatically signals to the agent that it should pull and start the image with the tag that already exists on the node.
Step 1. Create a versioned project.
Step 2. Add a rolling version to the project.
Step 3. Add images to the version.
Step 4. Add a deployment to the version.
Step 5. Click Create in the Deployment token card.
Step 6. Enter a name for your deployment token and set its expiration time, then click Create. You'll need to generate a new one when the token expires.
Step 7. Save the token somewhere secure as you won't be able to retrieve it later. Click Close when you're done.
Step 8. Paste the curl command into the pipeline.
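A sketch of such a pipeline step is below. The URL is a placeholder (copy the exact curl command the platform generates for you), and the token comes from a CI secret; here the command is only printed so the sketch stays side-effect free.

```shell
# In a real pipeline, DYO_DEPLOY_TOKEN is a masked CI secret
DYO_DEPLOY_TOKEN="${DYO_DEPLOY_TOKEN:-example-token}"
# Placeholder URL: use the one shown in the Deployment token card
DEPLOY_URL="https://your-instance.example.com/api/deploy"
CMD="curl --fail -X POST -H \"Authorization: Bearer ${DYO_DEPLOY_TOKEN}\" ${DEPLOY_URL}"
echo "$CMD"   # in a real pipeline you would execute this instead of printing it
```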
Never store your token in your git repository. Use pipeline secrets instead.
You can revoke the token by clicking on the Revoke token button in the Deployment token card.
Container routing is available for nodes where Traefik is enabled on node install.
The expose strategy of the container you wish to expose should be set to HTTP or HTTPS instead of none. In the routing section of the configuration screen, you need to specify a domain and a port. The additional variables are optional.
When a path is specified, the Traefik container will exclusively route requests with URLs containing that designated path prefix.
Enabling path stripping will result in forwarding the request without including the specified path.
The Traefik router's name will be generated automatically as "prefix + name."
If the router uses HTTPS, all necessary Let's Encrypt labels will also be added. It's important to note that middlewares can only be applied by adding them as custom Docker labels.
If you have Docker labels set for your container, you can specify them as key-value pairs in the config screen under the Docker labels section.
Keep track of the actions your teammates execute. This way if a malfunction occurs, you can understand if it was due to a specific action or an outside factor.
We track the user ID, time, and non-sensitive data related to actions on the platform. Audit logging on our platform only monitors actions executed by users via the platform.
We don't get access to self-managed team logs.
You can create configuration bundles which you're able to apply to deployments later. These are templatized key-value pairs for environment variables.
Configuration bundles are shared between the members of a team.
Configuration bundles act as .env files.
Multiple bundles can be applied to a deployment.
Keys can't conflict between bundles applied to a deployment.
Values of a bundle can be overwritten manually when you set up the deployment. This won't impact the value stored in the bundle.
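Conceptually, the layering behaves like later assignments winning in a .env-style file; an illustrative sketch, not platform code:

```shell
# Value supplied by the applied bundle:
LOG_LEVEL=info
# Manual override entered on the deployment; it wins for this deployment
# while the value stored in the bundle stays "info":
LOG_LEVEL=debug
echo "$LOG_LEVEL"   # debug
```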
Bundles and the deployments of incremental versions
In the case of an incremental version's successful deployment, all the values of the applied bundles are copied to the deployment, but the connection between the bundles and the deployment is terminated. The purpose of this is that when a bundle is modified later, it won't interfere with the values specified for an existing deployment.
Monitoring allows you to get instant feedback whether a deployment was successful or not, as seen below.
Besides deployment feedback, the platform is useful when you need to check up on your infrastructure: all you need to do is check the platform to get logs.
Configurations can be all over the place without a single source of truth when left unmanaged for long periods of time. The more configurations you need to deal with, the more likely you'll lose track of them. The platform can be used as a single source of truth for all of your configurations, while letting you add, remove, or modify configurations directly or via the JSON editor.
Every configuration you specify will remain stored after any modification or deletion to ensure you won’t have to spend time again defining already specified configurations.
Git repositories containing the configurations of your microservice architecture can be all over the place, because one repo won't cover all the configurations for all the images and components in your architecture. The platform substitutes Git repos by bringing every variable that belongs to a specific project into one place.
Think ahead: designing is the first step towards efficient and secure configuration management. Go through your organization’s structure, consider privileges and access points. This step is crucial for more efficient configuration management.
Configuration roll back: if it turns out the new configuration is faulty, you can roll back to the last functioning one.
Bundled configurations: instead of specifying the same configurations one by one to each component, you can apply variables to multiple components with one click by bundling them up.
You're able to define configurations for both images of a project and deployments. Variables that belong to images can be overwritten by deployment variables. You can also use sub filters to hide variables irrelevant to your configs. Below you can see all the variables for each filter – common, Kubernetes and Docker.
Common
Kubernetes
Docker
You're also able to customize your configuration in a JSON format, for easier copying.
The result should look like this:
File injection into containers is possible with the Storage function. It works with S3 API compatible storages; as of now, only Azure Blob Storage isn't supported.
We decided to go with S3 API compatible storages because S3 is one of the most popular technologies and offers interoperability with a fair number of open-source projects. These storages provide a range of capabilities on top of flat-structure file storage, including versioning, different types of access control, and so on.
One example of S3 API use cases is uploading an object via a REST endpoint, after which the object is available through a simple URL.
Amazon's S3 solution isn’t open-source but a few S3 API compatible open-source implementations are listed below:
MinIO,
OpenIO,
Scality.
Self-managed setup is only supported in Docker as of now, Kubernetes support is in the works.
The docker-compose below is only designed for demoing the platform; we don't suggest using it for any other purpose.
1 CPU
8 GB RAM
Docker or Podman installed
Disabling Sign Up
We don't have an option to disable signup, but you can restart the Kratos container with the following variable in the docker-compose:
SELFSERVICE_FLOWS_REGISTRATION_ENABLED=false
It should disable the registration flow and the platform will throw an error on the registration page.
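In compose terms, that variable goes under the Kratos service's environment, along the lines of the sketch below (the service name may differ in your compose file):

```yaml
services:
  kratos:
    environment:
      - SELFSERVICE_FLOWS_REGISTRATION_ENABLED=false
```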
Self-managed dyrector.io is free, unlimited, forever.
CLI is a tool for deploying a complete dyrector.io stack locally, for demonstration, testing, or development purposes.
Before using the CLI, make sure you have the following dependencies installed on your local machine:
Go (1.20 or higher) to run the go install command.
When you use Podman, make sure you have the network_backend set to netavark if you have an older installation (from before the 4.0 release), as the old CNI network backend doesn't support the name resolution we use. You also have to get the aardvark-dns plugin for the same reason.
If you use rootless containers, set your DOCKER_HOST environment variable correctly; if it's missing, Docker will assume /var/run/docker.sock and you can get misleading errors from it.
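A sketch of both settings (the socket path below is the common rootless default; verify yours with podman info):

```shell
# ~/.config/containers/containers.conf should contain:
#   [network]
#   network_backend = "netavark"

# Point Docker-compatible tooling at the rootless Podman socket:
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
echo "$DOCKER_HOST"
```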
Step 1: Execute go install github.com/dyrector-io/dyrectorio/golang/cmd/dyo@main
Step 2: Execute dyo up
Step 3: After you navigate to localhost:8000 (the default Traefik port), you will see a login screen.
Step 4: Register an account with whatever e-mail address you see fit (it doesn't have to be a valid one).
Step 5: Navigate to localhost:4436, where you will find your mail, as all outgoing e-mails land there.
Step 6: Open your e-mail message and use the link inside to activate your account.
Step 7: Happy deploying! 🎬
Step 2: Open the project folder and execute make up – an alias for go run ./golang/cmd/dyo up – and wait until you get the command prompt back. It should take a few minutes the first time you run it, as it will pull a few Docker images.
Step 3: Enter localhost:8000
in your browser's address bar. You're ready to use the platform.
The command-line interface (CLI) lets you run the platform's complete development environment locally with the following services: UI Service (crux-ui), Backend Service (crux), PostgreSQL databases, Authentication, Migrations, and SMTP mock mail server. The CLI also runs the migration services. The default container names are listed below:
dyo-stable_traefik
dyo-stable_crux-postgres
dyo-stable_kratos-postgres
dyo-stable_kratos
dyo-stable_kratos-migrate
dyo-stable_crux
dyo-stable_crux-migrate
dyo-stable_mailslurper
dyo-stable_crux-ui
As seen above, you can start the application with the up subcommand; after you've finished your work, you can stop and remove the containers with the down subcommand.
Running the stack again without stopping it will result in the containers being stopped, removed, then recreated.
The CLI generates a settings.yaml file containing the default configurations if the program doesn't find a configuration at the given path, or at the default path if none is given. The default path depends on your OS; you can find it at:
Linux: $XDG_CONFIG_HOME/dyo-cli/settings.yaml, where $XDG_CONFIG_HOME usually resolves to $HOME/.config.
macOS: $HOME/Library/Application Support/dyo-cli/settings.yaml.
Windows: %AppData%/dyo-cli/settings.yaml.
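Taking the Linux case as an example, the effective path resolves per the XDG convention:

```shell
# Fall back to ~/.config when XDG_CONFIG_HOME is unset
CONFIG_HOME="${XDG_CONFIG_HOME:-$HOME/.config}"
SETTINGS_PATH="$CONFIG_HOME/dyo-cli/settings.yaml"
echo "$SETTINGS_PATH"
```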
The settings.yaml file contains the following:
Please note that this file also stores some state, including passwords and secrets. These have to stay consistent to use the installation multiple times; with wrong passwords you won't be able to update or use the databases.
To get usage tips and learn more about the available commands from within the CLI, run the following as described above:
We’d love to hear from you!
It is possible to use container registries with self-signed (private) certificates. Be warned: this is an advanced, less convenient setup.
You can opt for using unchecked registries, which means images are not checked at all; image URLs are passed to the agent straight away. This way you can skip setting up certificates for the API.
Concatenate the .pem files into a single file:
cat *.cert.pem > node_extra_ca_certs
Mount the generated file into the container and provide the environment variable.
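For a Node.js service such as crux, the mount and variable could look like this compose sketch (the service name, image and file paths are assumptions; NODE_EXTRA_CA_CERTS is the standard Node.js variable mentioned later in this page):

```yaml
services:
  crux:
    volumes:
      - ./node_extra_ca_certs:/certs/node_extra_ca_certs:ro
    environment:
      - NODE_EXTRA_CA_CERTS=/certs/node_extra_ca_certs
```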
The two supported target nodes require different approaches.
Assuming the host already trusts the CA: when adding a node, the install script is visible. You can also use this script to add extra behavior if it is not explicitly implemented in the UI.
The component allows us to provide additional CA files using the SSL_CERT_FILE variable; it expects one .crt file that you have to mount from the host.
The last command of the script will be extended with two lines:
Make sure you use the correct values.
You have to modify the install script to mount the crt file to crane.
You can create a secret from a file using this one liner, make sure it is in the dyrectorio namespace.
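The one-liner could look like the sketch below (the secret and file names are placeholders, not taken from the official docs; only the dyrectorio namespace is required):

```shell
# Create a generic secret holding the CA certificate in the
# dyrectorio namespace (names are placeholders):
kubectl create secret generic private-registry-ca \
  --from-file=ca.crt=./ca.cert.pem \
  --namespace dyrectorio
```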
Storage capabilities don't cover configuration backup storage. Its sole purpose is to offer a way to inject files into containers.
You can set up the S3 implementations mentioned above as Docker containers. Find out how in the section.
This freedom comes with a trade-off. While we'll still offer support on our server, we take no responsibility for maintaining your own instance of the platform and we won't prioritize giving support over other users. Make sure you use the latest version for a consistent experience.
While we won't put a cap on the number of nodes, deployments, projects you manage with your self-managed instance, as with every self-managed software, environment or database related problems might occur, which we take no responsibility for. For a seamless experience, give the a look.
Docker installed on your system, but Podman works, too.
Step 1: Using a command line – POSIX shells, Git Bash or PowerShell –, pull the dyrector.io repository by executing git pull; if you don't already have the repository on your local machine, clone it by executing git clone https://github.com/dyrector-io/dyrectorio.git.
The dyo-stable prefix can be changed in the settings file of the application.
When you contribute to the platform, you can turn off crux and crux-ui with the global options listed below, or by overriding the values in the settings file.
If you have additional questions or ideas for new features, you can open an issue or start a new discussion on our CLI's open-source repository. You can also chat with our team on Discord.
The environment variable is NODE_EXTRA_CA_CERTS; the process expects a concatenated list of your certificates. More info:
Using Kubernetes, you have to make sure that the nodes already trust your CA.
Proxies provide a secure connection when you set up the platform for self-managed use. But they can be useful in other cases too, when you need a firewall, you'd like to hide your location, and so on.
When you set up the platform, we highly recommend using a proxy, such as Traefik or NGINX, to secure your network.
Traefik is used by default, as seen in the docker-compose designed for production use.
By default we recommend using Traefik but if you already use NGINX then here's an example.
When you configure NGINX for the platform, keep in mind the following:
Inbound traffic needs to be directed towards 3 containers: kratos, crux-ui, and crux. The 5 locations to define are listed below:
/crux-ui
/kratos (needs to be stripped)
Locations routed to crux-ui:
/api/auth
/api/status
Locations routed to crux:
/api
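A minimal sketch of those locations (upstream names and ports are assumptions based on a typical compose setup; the kratos location shows one way to strip the prefix, and longest-prefix matching sends /api/auth and /api/status to crux-ui while the rest of /api goes to crux):

```nginx
server {
    listen 443 ssl;

    # kratos: the /kratos prefix must be stripped before proxying
    location /kratos/ {
        proxy_pass http://kratos:4433/;  # trailing slash strips the prefix
    }

    # locations routed to crux-ui
    location /api/auth   { proxy_pass http://crux-ui:3000; }
    location /api/status { proxy_pass http://crux-ui:3000; }
    location /           { proxy_pass http://crux-ui:3000; }

    # location routed to crux
    location /api { proxy_pass http://crux:1848; }
}
```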
The platform isn’t a CI/CD tool but it covers certain steps of CD and its release management related aspects.
Both GitLab CI/CD and GitHub Actions can be integrated. The main benefit is that the platform enables you to manage multi-instance deployments to different environments, along with several different configurations. You can bring multiple services and operational practices under the same hood.
Deployments: you can integrate the services and pipelines you already use to build your applications, and then deploy them to any desired environment.
Change log generation: increase the transparency of version management by generating change logs based on commit messages left by your developers.
Configuration management: simplified configuration management to keep your infrastructure under control while you focus on delivering value to your users.
Secrets management: HashiCorp’s Vault integration enables you to store your sensitive keys, configuration variables, tokens and other secrets securely.
Monitoring: get notifications and alerts on multiple channels about what happens on your infrastructure.
Integrate Prometheus to track and monitor your application’s and infrastructure’s performance.
Make the data tracked by Prometheus visible and easy-to-interpret to non-technical stakeholders with Grafana.
Create and manage logs of events occurring for analytics purposes with Graylog.
Find out what's in the works on GitHub, or check out coming features and integrations we have in mind at the links below.
Send a feature request to hello@dyrector.io or drop a line on our Discord server.
Manually triggered deployments at the end of a CD pipeline. Related features: scheduled releases. Configure your release and schedule a deployment time for the designated environment when the deployment won't be disruptive to users.
Misconfiguration induced failures are very common, especially when you don't have to regularly deal with configuration variables. Add human factor to the equation and the chances of outages induced by faulty configurations increase significantly.
For this reason we're working on a feature that allows users to manage bundled configurations. This way you'll be able to treat your configurations in a templatized manner.
Unawareness of how one version of your image is different from another is a risky practice. To avoid this, the platform helps you create changelogs based on your commit messages with the Conventional Commits convention. Make sure developers working on your application leave meaningful commit messages, so everyone working on your project understands the details of each version.
Changelog generation can reduce knowledge gap between stakeholders, like developers who work every day on the project and outsiders who occasionally deal with the project, like decision-makers.
Besides the changelogs, you can also leave a comment related to nodes and projects on the platform which your teammates can see on the platform's UI.
Auto send changelogs: once the changelog is generated, you can send it via e-mail or chat, whichever your team uses to communicate. For further information, check the ChatOps section.
You can use HashiCorp’s Vault Integration for secrets management to keep your passwords and encryption keys secure. This means the platform compliments your DevSecOps practices. Your secrets will be securely stored by HashiCorp, which you can access through the platform.
HashiCorp Vault is the de facto tool of security, used by most organisations. We encourage our users to use it, as well.
General Secret Storage: store your secrets, including sensitive configurations, tokens, and API keys. Query them by using vault read.
Employee Credential Storage: instead of using sticky notes around your screen, store and distribute credentials in one place. Vault has an audit log mechanism, which lets you know who had access to one of the stored secrets. This simplifies monitoring which keys have been rolled or not.
API Key Generation for Scripts: generate temporary access keys for the duration of a script. The keys only exist for that duration and are logged by HashiCorp.
Data Encryption: aligning with the purpose of the platform, HashiCorp’s Vault enables developers to focus on developing. The Vault takes care of data encryption and decryption, instead of the developers and other technical staff on the team.
Role based access control allows you to manage the privileges of other users. This is an extra measure of security to your infrastructure and data. Using RBAC helps you to avoid situations when someone has unauthorised access to your account to execute harmful actions.
Principle of Least Privilege: consider the tasks of each user having access to your infrastructure. Only give them privileges necessary to complete their tasks, including modifying and managing configurations.
The platform's functionality implemented into ChatOps commands.
In case you decide to stop using the platform, you can generate YAML files to easily set up your services on another platform without losing data.
The platform is currently in the making, and so is its pricing. The public alpha hosted by us is completely free. If you want to give the platform a try hosted by yourself, you can do it, free of charge, forever.
Our plan with pricing is to have one, limited free package, and multiple paid plans with different usage caps. Paid users will have access to prioritized support compared to free users, and they're going to be able to manage more nodes, projects, and deployments in the platform.
Introduced node connection improvements: a node kick option was added, an issue with updated agents getting kicked was fixed, and an agent connection mechanism was added that attempts connection with the stored token first, then the environment token if the first attempt fails. Delivered a fix for agents occasionally crashing when a container is deleted. Fixed deployment logs and container event logs overwriting each other. Added Rocket.Chat and Mattermost notification integrations. Added working directory as a container option. Integrated PostHog for quality assurance purposes (more details about it here) - tracking can be disabled. Private Docker Hub registries are now supported. Added healthchecks to the CLI's up command. Added a show icon feature to relevant pages. Minor fixes and improvements.
Shout-out to chandhuDev for his contribution to this release.
More details about this release on GitHub.
Container settings (docker inspect) are now available in the platform. Updated deployment process screen with a progress bar. Container config fields are node type based now. Various fixes and updates: ory/kratos identity listing, unnecessary websocket error toasts, audit event filtering, key-value input error messages. Other fixes and improvements. Thanks to our Hacktoberfest contributors:
PapePathe
pedaars
GuptaPratik02
harshsinghcs
akash47angadi
More details about this release on GitHub.
Improved yup validation on the UI of the platform. Agent improvements: added an updating status to the agent, the update button is disabled when there's no update available, and fixed an agent update issue where the agent was stuck at an older version. Deployments are listed on the node detail page. Fixed a deployment issue where secrets were copied to a different node's deployment. Other fixes and improvements. More details about this release on GitHub.
Reworked agent connection handling to offer a more secure and stable user experience for node management. Added category labels to the platform's containers for better usability. The stack's Go toolchain was upgraded, deploymentStrategy is now utilized, and port routing is explicit. Implemented a fix for port range exposure. Minor fixes and improvements. More details about this release here.
Implemented two new capabilities: configuration bundles and protected deployments. Configuration bundles are configuration templates you can apply to other stacks you manage with the platform. Protected deployments prevent overwriting certain stacks on an infrastructure. The self-managed dyrector.io stack now pulls the latest image when a new version is available. Made several improvements to the UI of the platform: added a deployment creation card and table sorting, and images are now listed on the page of a registry. We made image and instance configuration settings more distinct from each other. Improved the sign-up and team validation workflow. Added an MLflow template. Minor fixes and improvements. More details about this release here.
Added team slug to API endpoints. Implemented a node check before deletion. Self-managed dyrector.io improvements: added HEALTHCHECK directives to self-managed dyrector.io images, upgraded ory/kratos to 1.0 in the dyrector.io stack. dagent improvements: host rule removed when no domain is given, unix socket based healthcheck. Configuration screen improvements: renamed ingress to routing in container configuration to simplify domain specification in the config editor, swapped internal and external port inputs, port validation fixes. Made improvements to teams. Other fixes and improvements. More details about this release here.
Implemented fixes to dagent related issues, deployment token migration. UI improvements. More details about this release here.
Local images can be added as unchecked registry. Fixed a bug that prevented users from generating a CD token. Minor fixes. More details about this release here.
Made continuous deployment token improvements: name can be added to tokens for better usability, CD events show up in audit log, CD token has never expire option. Social sign-in is now available: GitHub, GitLab, Google, Azure. Fixed node edit bug. Made improvements to agent, onboarding checklist. Minor fixes and updates. More details about this release here.
Added deployment tokens to trigger CD pipelines. Versionless projects can be converted to versioned. You can select what images you'd like to deploy. Improved registry workflow. Added reload when Kratos isn't available. Small UI improvements. Minor fixes and updates. More details about this release here.
Added crane to the signer image and added an additional cache restore. More details about this release here.
Implemented principle of least privilege RBAC when managing a Kubernetes cluster through the platform. Improvements to node setup flow, container management, dagent registry auth. Minor fixes and improvements. More details about this release here.
The release includes a fix for minor versioning in the CI process and a change in the release script to incorporate the version of Golang components. More details about this release here.
Added onboarding checklist to the dashboard to guide users through deployment process. Automated multiarch builds to dyrector.io agent, including ARM. Renamed products to projects and their types: simple to versionless, complex to versioned. Improved audit logs with agent connectivity data. Rolling projects are now copyable. Fixes and improvements. More details about this release here.
Fixes and improvements to secrets, private V2 registry addition to the platform, and container view UI improvements. More details about this release here.
Minor fixes. More details about this release here.
We made various improvements to the project codebase, including adding an offline bundle Makefile target for offline development. We also enhanced the documentation by refactoring the README.md file, including FAQs and a CLI docs link. Unused texts were removed, making the readme more concise. To improve usability, we enhanced the API descriptions and examples in the web documentation. We also introduced container config annotations for greater container configuration flexibility. Deployment management and tracking were improved with the implementation of deployment and event index functionalities.
We resolved a team invite captcha error and introduced code formatting for better readability. For logging and monitoring, we implemented HTTP and WebSocket audit log functionalities. Documentation organization was enhanced by adding .md files, and we improved the pull request labeling process. Title validation for pull requests was implemented, and OpenAPI descriptions and UUID parameter handling were validated. A PR labeler was added to automate labeling, and a deployment events API was introduced. In the UI module, we made the signup page responsive and implemented reCAPTCHA for team invites to enhance security.
More details about this release here.
This version includes various bug fixes, template refinements, and updates to container IDs and UI status. Additionally, technology labels were added to templates and a demo video was introduced in the release.
More details about this release here.
This release includes a number of bug fixes and new features across various components of the software stack. Notably, the crane component was fixed to use the original containerPreName as the namespace name, the web component now has a Gitea template and a PowerShell script, and the agent component now has an option to omit CPU limits. The ci component has also been updated with new e2e testing and image building capabilities, as well as improved image signing and push to DockerHub. Other improvements include new templates and health checks, as well as updates to dependencies and minor UI fixes.
More details about this release here.
Agents - dagent and crane - support ARM powered nodes now. New templates are added: Gitea & Minecraft server. Minor fixes to deployments, JSON editor and mapper.
More details about this release here.
Dashboard is available to get a quick glance of the latest deployments and statistics. Templates are now available for 5 applications: WordPress, Strapi, self-managed GitLab, Google Microservices Demo, LinkAce. Configuration management improvements and fixes: previous secrets are reused, listing glitches of variables are fixed in filters. Improvements to dagent and crane: distroless image, abs path support for mounts, container builder extra hosts.
More details about this release here.
dagent updated: POSIX and mingw64 are now supported. dagent updates and deletes itself when a new version of dagent is released. CLI improvements: verbose Docker & Podman error messages when the CLI is set up. Config updates: common, Kubernetes & Docker variable filters & new variables (labels & annotations) added, unmatched secrets are invalidated, and users can now mark secrets as required. Deployment updates: flow fixed, new capabilities (deletion & copy) available. Link navigation fixes in the signup email. crane registry auth update.
Thanks to our Hacktoberfest contributors:
SilverTux (add unit test for crane clients),
joremysh (add unit tests for agent/internal/crypt),
oriapp (replaced all try-catch's with catch (err)),
tg44 (zerolog introduced),
raghav-rama (add new docker container builder attribute: shell),
minhoryang (crane init for private-key generate on k8s, crane init at k8s deployment for shared secret, goair for new watch tool and makefile modified),
clebs (Add unit tests for golang/internal/mapper)
659 files changed, 48182 insertions(+), 24050 deletions(-)
More details about this release here.
Fixed Prisma segfault.
More details about this release here.
CLI tool developed for quick local setup. Read here how to use CLI.
Container configuration & secret management improved. Google Container Registries are supported. Notifications are implemented for Discord, Slack and Microsoft Teams to notify teammates of new Nodes, Products, deployment statuses and new teammates. Google Microservices Demo is available with DemoSeeder implementation. Agent related improvements. E2E tested with Playwright. Improved Audit log with pagination and server side filtering. Status page available to check the statuses of the services of the platform. User facing documentation is available. Minor glitches and bugs fixed.
More details about this release here.
Vital fixes and cleanups. Extended & actualized unit tests in agent.
More details about this release here.
Migration into a monorepo on GitHub to measure up to open-source requirements. Automations and multiple platform support (Apple Silicon, Windows) are now available to provide a convenient developer experience. The agent's install script now supports macOS. Contribution guidelines added: a code of conduct and a README.
More details about this release here.
Our main goal is to provide you with fast and reliable open-source services. To achieve this, we collect performance data about the navigation and the application feature usage, and send anonymous usage statistics to our servers. This data helps us track how changes affect the performance and stability of our open-source service and identify potential issues.
We are fully transparent about what data we collect, why we collect it, and how it's transmitted. The source code for the telemetry package is open-source and can be found here. If you do not want to share telemetry data and help improve our projects, you can opt out of this feature.
To provide a clear understanding of why we collect data, how it's collected, and what we do with it, as well as real-world examples of how this data has improved our projects, let's break down the data processing pipeline:
Telemetry data is collected from the browser.
This data is periodically sent to eu.posthog.com.
The data is stored and analyzed using the PostHog platform.
Our data processing pipeline has been designed with specific goals in mind:
To track the number of dyrector.io installs.
To understand which features are in use and how they are utilized.
To evaluate the frequency of specific feature usage.
To detect issues introduced by new features, such as buggy releases.
You have the option to disable quality assurance features, also known as telemetry, by using the environment variable `QA_OPT_OUT=true`. Disabling telemetry doesn't have any drawbacks, except that it prevents us from making improvements to the project.
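As a rough sketch of how such an opt-out flag can be honored (the `QA_OPT_OUT` variable name comes from the docs above; the helper function itself is illustrative, not the platform's actual implementation):

```python
import os

def telemetry_enabled() -> bool:
    # Telemetry stays on unless the opt-out flag is set.
    # QA_OPT_OUT is the variable documented above; accepting "1" as well
    # as "true" is an assumption for robustness.
    return os.environ.get("QA_OPT_OUT", "").lower() not in ("true", "1")

os.environ["QA_OPT_OUT"] = "true"
print(telemetry_enabled())  # False: no events are sent
```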
In order to safeguard your privacy, we take several measures to protect your data:
We are unable to access or store the IP address of your host or users. Our PostHog project's "Discard client IP data" is active, thus we are not able to identify who sent the data. You can find a comprehensive list of transmitted URL paths in the Request Telemetry section.
We do not transmit any environment information from the host except for:
Operating system (e.g., Windows, Linux, OSX)
Target architecture (e.g., amd64, darwin, ...)
Display dimensions
Browser data (e.g., Firefox, Chrome, English, browser version)
All this information is stored in an aggregated format without any personally identifiable data, ensuring your privacy is protected.
To facilitate the identification of installations, each running instance is assigned a unique identifier generated as a version 4 Universally Unique Identifier (UUID). This identification process is triggered when we are confident that the instance is not a test instance, such as a tutorial or a local installation.
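Generating such an identifier is straightforward; version 4 UUIDs are random, so collisions between installations are practically impossible. A minimal sketch (the variable name is illustrative):

```python
import uuid

# Hypothetical helper: what assigning a per-installation identifier
# could look like. uuid4() produces a random, version 4 UUID.
instance_id = str(uuid.uuid4())
print(instance_id)
```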
The system metrics we collect include:
- `$os`: The operating system of the user's device.
- `$browser`: The web browser used by the user.
- `$device_type`: The type of device used, such as "Desktop" or "Mobile".
- `$browser_version`: The version of the web browser.
- `$browser_language`: The language of the web browser.
- `$screen_height`: The height of the user's screen in pixels.
- `$screen_width`: The width of the user's screen in pixels.
- `$viewport_height`: The height of the viewport in pixels.
- `$viewport_width`: The width of the viewport in pixels.
- `$lib`: The PostHog library used; this will always be web.
- `$lib_version`: The version of the PostHog library.
- `$insert_id`: An identifier associated with the insertion.
- `$time`: A timestamp associated with the event.
- `distinct_id`: A distinct identifier for the user or event.
- `$device_id`: The identifier associated with the user's device.
- `$groups`: A dictionary of group identifiers.
These measures allow us to effectively manage and group installations while maintaining data security and privacy.
The full code-base is open source.
Portainer is a popular Docker GUI for containerized applications. It supports Docker, Docker Swarm and Kubernetes orchestrated runtimes.
The most important difference between Portainer and dyrector.io is the feature-rich release management capability the latter offers for containerized applications. You're able to configure and deploy your application's containerized stack with dyrector.io.
While dyrector.io can trigger multi-instance deployments, Portainer offers single-node deployments only.
Portainer is very useful for teams who would like to interact with containerized applications and their infrastructure in a simple manner. Its Community Edition leaves room for many Docker specific use cases, but some premium features are only available in the Business Edition which offers limited usage for free users.
dyrector.io offers more for teams who develop their own software and want to manage its versions as a containerized stack, and then deploy them to a single or multiple deployment targets with a single click. Self-managed dyrector.io is 100% open-source without usage limitations.
Unfortunately we don't support applications that don't run in a containerized environment.
Short: Generic configuration => Image, specific configuration => Container config, configuration is inherited from Image configuration.
Parameters that are generic and context independent should be defined at the Image level. Other, context dependent information, like an environment variable `PUBLIC_URL="https://example.com"`, should be defined at the Deployment level. During deployment there is a one-way merge of these configurations, with the Container configuration taking the higher priority.
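The one-way merge described above can be sketched as a plain dictionary merge, with the container-level (deployment-specific) values overriding the image-level defaults. The keys shown are illustrative:

```python
# Image-level config: generic, context independent defaults.
image_config = {"TZ": "UTC", "PUBLIC_URL": "http://localhost"}

# Container-level config: context dependent overrides set on the deployment.
container_config = {"PUBLIC_URL": "https://example.com"}

# One-way merge: container config has the higher priority.
merged = {**image_config, **container_config}
print(merged)  # {'TZ': 'UTC', 'PUBLIC_URL': 'https://example.com'}
```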
Yes.
Unfortunately there's no such capability within the platform now, but creating a similar functionality is in our plans.
Unfortunately routing isn't managed by the platform itself. When you'd like to access your node or a deployed stack through a domain, you'll need to configure routing on your own.
dyrector.io is for anyone who works with containerized applications. That means organizations, or independent developers can gain advantage of the platform's functions.
We needed a solution for self-service release management capabilities for an entirely different project. We couldn't find one that fit exactly our needs, so we made our own platform.
No. It can be configured with any environment, cloud or on-premises, Docker or Kubernetes. Read how you can .
Yes. Find the NGINX example .
Unfortunately no, but there are settings you can use to disable Kratos. More details .
The platform provides , but it doesn't offer CI, as of now. Such capabilities are in our long-term plans.
Self-managed use will stay 100% free and unrestricted. But most of our team works full-time on the platform. While we gain some revenue as a cloud consultancy, we accept to fund the project.
Don’t hesitate, reach out to us at . Also drop us a mail to the same address if you find a bug or any other anomaly you experience. We’ll respond within 24 hours.
Send us an e-mail at or check in on .
Join our community to discuss DevOps, Kubernetes or anything on our public .
You can find the project's GitHub repository . Feel free to contribute!
Check our to read posts written by our team of DevOps specialists to improve your processes.
Follow us on , , and to stay updated about the latest developments.
If you got any question or feedback, let us know at .
| | Portainer | dyrector.io |
| --- | --- | --- |
| Supports Docker & K8s | ✅ | ✅ |
| Continuous Deployments | ✅ | ✅ |
| Chat notifications | ❌ | ✅ |
| Multi-instance deployments | ❌ | ✅ |
| Open-source | Premium functions restricted | 100% open-source |
Lists every registry available in the active team. The request must include `teamSlug` in the URL. The response is an array including the `name`, `id`, `type`, `description`, and `icon` of each registry. Registries are 3rd party registries where the container images are stored.
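As an illustration of calling a team-scoped endpoint like this one, the sketch below builds the request URL and headers. The base URL and path layout are assumptions for the example; consult the live OpenAPI documentation for the real route, and the Tokens endpoints for obtaining an access token:

```python
# Hypothetical base URL; self-managed installs will differ.
API_BASE = "https://app.dyrector.io/api"

def registries_url(team_slug: str) -> str:
    # Assumed path shape: the team slug scopes the request to a team.
    return f"{API_BASE}/{team_slug}/registries"

# Access tokens come from the Tokens API; placeholder value here.
headers = {"Authorization": "Bearer <YOUR_TOKEN>"}
print(registries_url("my-team"))
```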
Lists the details of a registry. The request must include `teamSlug` and `registryId` in the URL. `registryId` refers to the registry's ID. The response includes the `name`, `id`, `type`, `description`, `imageNamePrefix`, `inUse`, `icon`, and audit log info of the registry.
Returns a project's details. `teamSlug` and `projectId` need to be included in the URL. The response contains the project's `name`, `id`, `type`, `description`, deletability, and versions with version related data, including each version's `name` and `id`, `changelog`, and increasability.
Returns an array containing every version that belongs to a project. `teamSlug` and `projectId` must be included in the URL. `projectId` refers to the project's ID. Details include each version's `name`, `id`, `type`, `audit` log details, `changelog`, and increasability.
Returns the details of a version in the project. `teamSlug`, `projectId`, and `versionId` must be included in the URL. `projectId` refers to the project's ID, `versionId` refers to the version's ID. Details include the version's `name`, `id`, `type`, `audit` log details, `changelog`, increasability, mutability, deletability, and all image related data, including the `name`, `id`, `tag`, `order`, and configuration data of the images.
This call deletes a version. `teamSlug`, `projectId`, and `versionId` must be included in the URL. `projectId` refers to the project's ID, `versionId` refers to the version's ID.
No body
This call turns a version into the default one, so that other versions within this project later inherit images, deployments, and their configurations from it. `teamSlug`, `projectId`, and `versionId` must be included in the URL. `projectId` refers to the project's ID, `versionId` refers to the version's ID.
No body
Fetch details of the images within a version. `projectId` refers to the project's ID, `versionId` refers to the version's ID. Both, along with `teamSlug`, are required in the URL. Details come in an array, including the `name`, `id`, `tag`, `order`, and config details of each image.
Fetch details of an image within a version. `projectId` refers to the project's ID, `versionId` refers to the version's ID, `imageId` refers to the image's ID. All three, along with `teamSlug`, are required in the URL. Image details consist of the `name`, `id`, `tag`, `order`, and the config of the image.
Delete an image. `projectId` refers to the project's ID, `versionId` refers to the version's ID, `imageId` refers to the image's ID. All three, along with `teamSlug`, are required in the URL.
No body
The list of teams consists of `name`, `id`, and `statistics`, including the number of `users`, `projects`, `nodes`, `versions`, and `deployments`. Teams are the shared entity of multiple users. The purpose of teams is to separate users, nodes and projects based on their needs within an organization. Team owners can assign roles. More details about teams here.
Get the details of a team. The request must include `teamId`, the ID of the team to get the data of. Data of a team consists of `name`, `id`, and `statistics`, including the number of `users`, `projects`, `nodes`, `versions`, and `deployments`. The response includes user details as well, including `name`, `id`, `role`, `status`, `email`, and `lastLogin`.
This call sends a new invitation link to a user who hasn't accepted an invitation to a team. The request must include `teamId` and `userId`. Admin access is required for a successful request.
No body
Get the list of deployments. The request needs to include `teamSlug` in the URL. A deployment includes `id`, `prefix`, `status`, `note`, `audit` log details, project `name`, `id`, `type`, version `name`, `type`, `id`, and node `name`, `id`, `type`.
Get the details of a certain deployment. The request must include `teamSlug` and `deploymentId` in the URL. Deployment details include `id`, `prefix`, `environment`, `status`, `note`, `audit` log details, project `name`, `id`, `type`, version `name`, `type`, `id`, and node `name`, `id`, `type`.
The request must include `teamSlug`, `deploymentId`, and `instanceId`, which refer to the ID of a deployment and the instance, in the URL. Instances are the manifestation of an image in the deployment. The response includes `state`, `id`, `updatedAt`, and `image` details, including `id`, `name`, `tag`, `order`, and `config` variables.
The request must include `teamSlug`, `deploymentId`, and `instanceId`, which refer to the ID of a deployment and the instance, in the URL. The response includes the container `prefix` and `name`, along with `publicKey` and `keys`.
The request must include `teamSlug` and `deploymentId` in the URL. The response includes an `items` array with objects of `type`, `deploymentStatus`, `createdAt`, `log`, and `containerState`, which consists of `state` and `instanceId`.
Fetch data of deployment targets. The request must include `teamSlug` in the URL. The response includes an array with each node's `type`, `status`, `description`, `icon`, `address`, `connectedAt` date, `version`, `updating`, `id`, and `name`.
Fetch data of a specific node. The request must include `teamSlug` in the URL and `nodeId` in the body. The response includes the node's `type`, `status`, `description`, `icon`, `address`, `connectedAt` date, `version`, `updating`, `id`, `name`, `hasToken`, and agent installation details.
The request must include the `teamSlug` in the URL and the node's `name` in the body. The response includes `type`, `status`, `description`, `icon`, `address`, `connectedAt` date, `version`, `updating`, `id`, `name`, `hasToken`, and `install` details.
The request must include `teamSlug` in the URL, and its body must include `skip`, `take`, and the dates `from` and `to`. The response includes an array of `items`: `createdAt` date, `event`, and `data`.
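A sketch of such a request body, using `skip`/`take` paging with a `from`/`to` date window. The field names follow the description above; the ISO 8601 formatting of the dates is an assumption:

```python
from datetime import datetime, timedelta, timezone

# Fixed "now" so the example is deterministic; in practice use
# datetime.now(timezone.utc).
now = datetime(2024, 1, 31, tzinfo=timezone.utc)

body = {
    "skip": 0,     # number of entries to skip (paging offset)
    "take": 100,   # page size
    "from": (now - timedelta(days=30)).isoformat(),
    "to": now.isoformat(),
}
print(body["from"])  # 2024-01-01T00:00:00+00:00
```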
The request must include `nodeId` and `prefix`. The response includes the `id`, `command`, `createdAt`, `state`, `status`, `imageName`, `imageTag`, and `ports` of the images.
The request must include `skip`, `take`, and the dates `from` and `to`. The response includes an array of `items`: `createdAt` date, `userId`, `email`, `serviceCall`, and `data`.
The request must include the `teamSlug` and `notificationId` parameters in the URL. The response includes `type`, `enabledEvents`, `id`, `name`, `url`, `active`, and `creatorName`.
`teamSlug` is required in the URL. The response includes `users`, the number of `auditLogEntries`, `projects`, `versions`, `deployments`, `failedDeployments`, details of `nodes`, `latestDeployments`, and `auditLog` entries.
Get the details of a storage. The request must include `teamSlug` and `storageId` in the URL. The response includes the `description`, `icon`, `url`, `id`, `name`, `accessKey`, `secretKey`, and `inUse`.
To add a new registry, include `teamSlug` in the URL; the body must include `name`, `type`, `description`, `details`, and `icon`. `type`, `details`, and `name` are required. The response is an array including the `name`, `id`, `type`, `description`, `imageNamePrefix`, `inUse`, `icon`, and audit log info of the registry.
Modify the `name`, `type`, `description`, `details`, and `icon`. `registryId` refers to the registry's ID. `teamSlug` and `registryId` are required in the URL; the body must include `type`, `details`, and `name`.
No body
Create a new project for a team. `teamSlug` needs to be included in the URL. A newly created project has a `type` and a `name` as required variables, and optionally a `description` and a `changelog`.
Updates a project. `teamSlug` is required in the URL, as well as `projectId` to identify which project is modified. `name`, `description`, and `changelog` can be adjusted with this call.
No body
Creates a new version in a project. `projectId` refers to the project's ID. `teamSlug` and `projectId` must be included in the URL. The request's body needs to include the `name` and `type` of the version; `changelog` is optional. The response includes the `name`, `id`, `changelog`, increasability, `type`, and `audit` log details of the version.
Updates a version's `name` and `changelog`. `teamSlug`, `projectId`, and `versionId` must be included in the URL. `projectId` refers to the project's ID, `versionId` refers to the version's ID.
No body
Increases the version of a project with a new child version. `teamSlug`, `projectId`, and `versionId` must be included in the URL. `projectId` refers to the project's ID, `versionId` refers to the version's ID. `name` refers to the name of the new version and is required in the body.
Add new images to a version. `projectId` refers to the project's ID, `versionId` refers to the version's ID. Both, along with `teamSlug`, are required in the URL. `registryId` refers to the registry's ID, `images` refers to the name(s) of the images you'd like to add. These are required variables in the body.
Modify the configuration variables of an image. `projectId` refers to the project's ID, `versionId` refers to the version's ID, `imageId` refers to the image's ID. All three, along with `teamSlug`, are required in the URL. `tag` refers to the version of the image, `config` is an object of configuration variables.
No body
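An illustrative request body for this call could look as follows. The top-level `tag` and `config` fields follow the description above; the keys inside `config` are placeholders, not the complete configuration schema:

```python
# Hypothetical image update body: a tag plus a config object.
patch_body = {
    "tag": "1.2.3",  # the image version to use
    "config": {
        # Illustrative configuration variables only.
        "name": "backend",
        "environment": [{"key": "NODE_ENV", "value": "production"}],
    },
}
print(sorted(patch_body))  # ['config', 'tag']
```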
Edit the image deployment order of a version. `projectId` refers to the project's ID, `versionId` refers to the version's ID. Both, along with `teamSlug`, are required in the URL. The request body should include the IDs of the images in an array.
No body
The request must include `name`, which is going to be the name of the newly created team. The response includes `name`, `id`, and `statistics`, including the number of `users`, `projects`, `nodes`, `versions`, and `deployments`.
The request must include `teamId`, `email`, and `firstName`. Admin access is required for a successful request. The response includes the new user's `name`, `id`, `role`, `status`, `email`, and `lastLogin`.
The request must include `teamSlug` in the URL, while `versionId`, `nodeId`, and `prefix`, which refer to the ID of a version, the ID of a node, and the prefix of the deployment, must be included in the body. The response includes the deployment `id`, `prefix`, `status`, `note`, and `audit` log details, as well as the project `type`, `id`, `name`, version `type`, `id`, `name`, and node `type`, `id`, `name`.
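An illustrative body for creating a deployment, with the three required fields from the description above. The IDs are random placeholders; in practice they come from the Versions and Nodes endpoints:

```python
import uuid

# Hypothetical create-deployment body; versionId and nodeId are placeholders.
create_deployment = {
    "versionId": str(uuid.uuid4()),  # which version to deploy
    "nodeId": str(uuid.uuid4()),     # which node to deploy to
    "prefix": "staging",             # containers are deployed under this prefix
}
print(sorted(create_deployment))  # ['nodeId', 'prefix', 'versionId']
```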
The request must include `teamSlug`, `deploymentId`, and `instanceId` in the URL, and a portion of the instance configuration as `config` in the body. The response includes the `config` variables in an array.
No body
The request must include `teamSlug` and `deploymentId` in the URL; the deployment identified will be copied. The body must include the `nodeId` and `prefix`, and optionally a `note`. The response includes the deployment data: `id`, `prefix`, `status`, `note`, and miscellaneous details of the `audit` log, `project`, `version`, and `node`.
The request must include `teamSlug` and `deploymentId` in the URL. The body must include a `name` and optionally the expiration date as `expirationInDays`.
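A minimal sketch of the token request body described above; omitting `expirationInDays` would presumably yield a non-expiring token (see the never-expire option in the changelog):

```python
# Hypothetical deployment-token request body.
token_request = {
    "name": "ci-pipeline",     # a label to recognize the token by
    "expirationInDays": 30,    # optional; omit for a non-expiring token
}
print(token_request["name"])  # ci-pipeline
```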
The request must include the `teamSlug` in the URL and the node's `name` in the body. The response includes an array with the node's `type`, `status`, `description`, `icon`, `address`, `connectedAt` date, `version`, `updating`, `id`, and `name`.
The request must include the `teamSlug` in the URL and the node's `name` in the body; the body can also include a `description` and `icon`.
No body
The request must include `teamSlug` in the URL, along with `nodeId`, `type`, and `scriptType`.
The request must include `teamSlug` in the URL, and `type`, `enabledEvents`, `id`, `name`, `url`, and `active` in the body. The response lists `type`, `enabledEvents`, `id`, `name`, `url`, `active`, and `creatorName`.
The request must include `teamSlug` in the URL, and `type`, `enabledEvents`, `id`, `name`, `url`, and `active` in the body. The response includes `type`, `enabledEvents`, `id`, `name`, `url`, `active`, and `creatorName`.
The request must include `type`, `id`, and `name`. The response includes the `id`, `name`, `description`, `type`, and `audit` log details of the templates.
Creates a new storage. The request must include `teamSlug` in the URL; the body is required to include `name` and `url`. The request body may include `description`, `icon`, `accessKey`, and `secretKey`. The response includes `description`, `icon`, `url`, `id`, `name`, `accessKey`, `secretKey`, and `inUse`.
Updates a storage. The request must include `teamSlug` and `storageId` in the URL. `name` and `url` must be included in the body. The request body may include `description`, `icon`, `accessKey`, and `secretKey`.
No body
Cal.com is a meeting scheduler application. Users can set up its self-hosted stack.
Cal.com is awesome. We like it and use it every day, and other people like it, too. Chances are, if you're here, you like it as well. Even better, they support self-hosting. But as some users pointed out on Hacker News, self-hosting Cal.com is a challenging process. So, we turned it into a template for easy setup.
After the Node where you'd like to run Cal.com is registered, you can set it up by following the steps of deployments as documented here.
cal-db (latest)
`POSTGRES_PASSWORD` has to be specified.
cal-com (2.5.10)
`DATABASE_URL` needs to contain `POSTGRES_PASSWORD`'s value, in the form `postgresql://cal-user:${POSTGRES_PASSWORD}@cal-db:5432/cal-db`, for cal-db.
`NEXTAUTH_SECRET` and `CALENDSO_ENCRYPTION_KEY` need to be specified. We recommend OpenSSL to generate these secrets.
If you have a node with Traefik enabled, you can use http://cal.localhost (or any other domain set up in the ingress settings) by setting `NEXT_PUBLIC_WEBAPP_URL` to the public URL.
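Preparing these variables can be sketched as follows. The docs recommend OpenSSL for the secrets; `secrets.token_hex` is a Python stand-in shown here only to make the shape of the values concrete, and the secret lengths are assumptions:

```python
import secrets
from string import Template

# Stand-in for e.g. OpenSSL-generated random secrets.
postgres_password = secrets.token_hex(16)
nextauth_secret = secrets.token_hex(32)
calendso_encryption_key = secrets.token_hex(32)

# DATABASE_URL must embed POSTGRES_PASSWORD's value for the cal-db container.
database_url = Template(
    "postgresql://cal-user:${POSTGRES_PASSWORD}@cal-db:5432/cal-db"
).substitute(POSTGRES_PASSWORD=postgres_password)
print(database_url)
```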
Once the deployment is successful, self-hosted Cal.com is ready to use at localhost:7000 by default, as seen below.