DeployBot Blog 2020-12-21T11:26:13+00:00

New Feature released: Server Page and File Browser
2020-11-02T11:06:00+00:00 (updated 2020-11-02T11:17:34+00:00) – Heike Jurzik

Our developers are constantly working on improving DeployBot and creating a great user experience. As a freelancer or small agency you probably know a few things about UX design and how important it is to turn complex problems into simple solutions. In our opinion, good UX design also means that the application interface offers help and guidance during the setup process. In DeployBot that includes configuring repositories, environments, and servers.

After putting some work into the navigation design and improving the DeployBot sidebar, we added a feature which lets you view detailed log files of your deployments. We've now taken a step back to improve the configuration process. The setup of servers and the destination path for your deployments still happens in one configuration dialog, but in two separate steps.

Connect a Server

So, let's get straight to the point and have a look at the new setup. After you've decided to add a new server to an environment (Atomic SFTP or SFTP), you enter a name, the host name or IP address, the port number, and your credentials – nothing new here. Before you can define the destination path for your deployments, though, you have to click the Connect button to make sure DeployBot can connect to the remote system.

server configuration dialog

DeployBot runs a test to check whether it can log in to the remote machine with the provided user credentials. If everything is correct, you should see the message:

"Well done! Your server has been successfully connected."

If not, please make sure that the username and password match. If you’re using a public SSH key for authentication and see an error message in DeployBot, double-check that the key has been added to the file ~/.ssh/authorized_keys and that the permissions are set correctly.
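As a quick illustration of the "permissions set correctly" part (the exact paths on your server may differ): SSH refuses public-key logins if the key directory or file is accessible to other users. The commands below show the usual fix, run here against a scratch directory standing in for $HOME so they are safe to copy and try:

```shell
# SSH is strict about permissions: ~/.ssh must be accessible only by
# its owner, and authorized_keys must not be group- or world-writable,
# otherwise public-key authentication is silently refused.
HOME_DEMO=$(mktemp -d)                 # stands in for the remote $HOME
mkdir -p "$HOME_DEMO/.ssh"
touch "$HOME_DEMO/.ssh/authorized_keys"
chmod 700 "$HOME_DEMO/.ssh"                  # drwx------
chmod 600 "$HOME_DEMO/.ssh/authorized_keys"  # -rw-------
stat -c '%a %n' "$HOME_DEMO/.ssh" "$HOME_DEMO/.ssh/authorized_keys"
```

On the real server you would run the two chmod commands against ~/.ssh directly.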

Note: Keep in mind that you can only modify a server’s settings when you’re disconnected. If you’d like to change the credentials or the name shown in the dashboard, please click the Disconnect button first, change the settings, and re-connect.

Enter the Destination Path

Only when DeployBot is connected to the server can you enter the destination path. Please be extra careful: this is where your files will be deployed to. The default is your home directory on the remote machine ($HOME/), so you will probably have to adjust this and enter the path to the website you want to deploy.

If you already know the exact path name, you can simply enter it in the Destination path field. If you're unsure, our new file browser is here to help. Click the button next to the field to open a new window that looks like a simple file manager. DeployBot needs a short moment to fetch the directory structure of the remote system and then displays the home directory of the logged-in user (including hidden files and directories, whose names start with a dot).

the new file browser

To open a folder, use a single mouse click. We've also added some links at the top of the file browser: root (changes to the root folder, /) and home (which brings you back to the home directory). 
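The dot convention for hidden entries is the same one you know from the shell: a plain `ls` skips them, while `ls -A` lists them, comparable to what the file browser displays. A quick local demonstration:

```shell
# Entries whose names start with a dot are "hidden":
# a plain ls skips them, ls -A includes them.
DEMO=$(mktemp -d)
touch "$DEMO/index.html" "$DEMO/.htaccess"
ls "$DEMO"      # shows only index.html
ls -A "$DEMO"   # shows .htaccess and index.html
```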

Tip: To navigate back to the previous folder, simply click on the link next to Current path. Once you've found the correct destination path, you can confirm it via Use this path in the top right corner.   

Before you can save the settings, please perform a write test with the button of the same name. DeployBot then temporarily creates a directory in your destination path and immediately removes it again. If everything works out, you should see something like "Good work! The write test was completed successfully." If not, you can click the button Edit path to change the current settings.
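We won't go into DeployBot's internals here, but conceptually such a write test boils down to creating a scratch directory in the destination path and removing it again. A minimal local sketch (the temporary directory stands in for your destination path; on a real server this would run over SSH):

```shell
# Simulate a write test: create a probe directory inside the
# destination path and remove it immediately afterwards.
DEST=$(mktemp -d)              # stands in for the destination path
probe="$DEST/.write-test-$$"
if mkdir "$probe" && rmdir "$probe"; then
  echo "write test passed"
else
  echo "write test failed"
fi
```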

Happy to help!

We hope you like the new feature and find it easier to set up a new server and select the correct destination path. If you have any suggestions or comments, please get in touch – happy deploying!

New GitLab Integration for DeployBot
2020-06-22T11:13:00+00:00 (updated 2020-08-05T15:27:34+00:00) – Heike Jurzik

GitLab's popularity with freelancers and small web agencies has been steadily increasing over the last few years, not just since Microsoft bought GitHub in 2018. Maybe you already have an account, maybe you're planning to create one – connecting it to DeployBot has become a lot easier now. What's even more important, our new GitLab integration via GitLab's API makes sure that the correct webhook is created automatically when connecting the two platforms, so GitLab can inform DeployBot as soon as there are new commits in your repos. Previously, it was necessary to create the webhook manually.

When connecting a new repository via DeployBot's dashboard, you can see the new tab GitLab, next to GitHub and Bitbucket. GitLab is no longer hiding in the Others section. (Note: This only refers to repositories, connecting self-hosted GitLab instances still happens in Others.) 

adding a GitLab repository in DeployBot

The rest of the configuration is basically the same:

  • Connect your GitLab account.
  • Choose a repository from the drop-down menu.
  • Add a title of your choice. (optional)
  • Choose a color label. (optional)
  • Click Connect.

Since DeployBot now talks to the GitLab API, it creates the new webhook automatically. This webhook tells DeployBot about new commits, which then appear in your dashboard.

DeployBot dashboard, ready to be deployed

If you've configured a staging environment with automatic deployment, there is no need to click the Deploy button when you’re ready to release – DeployBot handles those deployments for you, like any good CI/CD tool should do.

You can find the configured webhooks for the GitLab repositories in Settings / Webhooks & Badges. From there, it's easy to copy & paste them, for example, if you're configuring the Chrome Extension which allows you to deploy manually from the web browser.

copy & paste of the webhook into the DeployBot Chrome extension

With the new GitLab integration you don't have to worry about a thing. Enjoy, and happy deploying!

New Feature: Log Files for DeployBot
2020-04-06T13:30:00+00:00 (updated 2020-04-06T14:03:41+00:00) – Heike Jurzik

“Why do my deployments take such a long time?”

"It's been ages since I committed something, why doesn't DeployBot start the automatic deployment? I'm pretty sure I configured everything correctly."

"Why does the deployment to Heroku fail?"

Well, we don't have answers to everything, but we can certainly help you to find answers to those questions. DeployBot has a nice new feature: you can now access detailed log files for each deployment. Those logs are available for all environments, for all servers and commits. You can even analyse and check older commits, since the feature also works retrospectively.

So, where do you find the logs? The fastest way is via the new sidebar:

  1. Select a repository and then one of the configured environments.
  2. For each environment you'll find a menu entry called History.
  3. The history shows a list of all deployments in chronological order. Select the commit you'd like to investigate further.
  4. Right after the commit message you can find the section Deployment Log. Click the button View log to open a new window with the respective log file.

The log shows the commit number, the server name, and the number of new, updated, and deleted files/directories in that commit. Below, you can see a detailed list with timestamps, durations, and the performed action (create, delete, update, ignore, or output) for every file and directory. The right side displays the names of the files and directories. Here you can also find detailed messages that help you analyse the problem if a deployment fails. At the bottom of the dialog, a message in green or red tells you whether the deployment finished successfully or failed.

DeployBot log file

Note: For some (older) deployments the first line in the log file shows N/A instead of the actual duration, which means this information is not available in the logs. Deployments after March 13, 2020, contain the correct duration for the first step.

New Sidebar Feature in DeployBot
2020-02-05T11:08:00+00:00 (updated 2020-08-05T15:20:39+00:00) – Heike Jurzik

New Features in DeployBot

How many clicks do you need to get to your profile or notification settings, the integrations with external apps, and your containers? How long does it take you to display all connected repositories, their environments, and the related servers & settings? 

To be honest, I never counted my clicks; I just saved some bookmarks for the pages I needed frequently. With the new sidebar that's no longer necessary, and – let's face it – DeployBot looks so much better now.

The new DeployBot Sidebar

It's not just about the design, though, it also improves DeployBot's usability. After you've logged in, just click on the hamburger button (icon with three horizontal bars) in the top left corner to toggle the sidebar. On the left side you see a tree-like structure; the current page gets highlighted. You can reach the following settings (from top to bottom):

  • DeployBot dashboard
  • Your Profile (including notification settings)
  • Account settings (including plans & billing and the security settings)
  • Repositories (overview of all connected repositories)

For every repository you can also easily reach its settings, for example the permissions, the webhooks & badges, and the environments with their server configuration and additional configuration files.

TIP: Click on the pin icon to keep the sidebar open during navigation. If you're searching for a particular setting or repository, use the filter box at the top of the sidebar.

So, I'm pretty happy with the newest addition to DeployBot – the user-friendly layout saves a lot of time and navigating has become a lot easier. Let us know what you think and leave a comment!

3 Types of People that should consider working with an automated Deployment Tool
2020-01-29T11:18:00+00:00 (updated 2020-08-05T15:15:43+00:00) – Heike Jurzik

3 Types of People that use an automated Deployment Tool: Developer, Operator, and Project Manager

"But I only need to do this once, really, automating things doesn't pay off." That's a phrase often heard in IT, but rarely true. Many things that developers and sysadmins expect to be doing only once become regular tasks, in software development as well as in DevOps. Especially when it comes to building, testing and deploying code, the above statement is outright wrong. "Release early, release often" is a fundamental concept of agile software development, and there is no agility without automation. Automation is also the primary goal of DevOps: It's all about building and delivering new products faster (than your competitors). 

So, if you ask yourself "is this for me?" or "will automation make my work life easier?", then maybe this blog post can answer those questions. Let's have a look at three types of people that should consider working with an automated deployment tool.

The Developer

It doesn't matter if you're a freelancer or employed by a company – as a developer you're mostly responsible for writing code. Your strength is your creativity, your way of thinking, combined with your expertise in scripting or programming languages. Your customers expect you to use that strength to develop new applications or shiny new features for existing apps.

Today, most developers not only think about the design and the actual implementation, they're also involved in operational tasks. That includes managing the code in a version control repository, and testing and building the application on different platforms. The good news is: the actual deployment can easily be automated – and it certainly should be.

For you, as a developer, an automated deployment tool brings massive advantages: there is no need to manually track changes and upload files, you can quickly roll back a problematic release, and share automatically generated release notes with team members or customers.

The Operator

As a member of the ops team you might look at things from a different perspective. After all, you're responsible for the infrastructure. Of course, you love working with your colleagues from the dev team, and just like them, you can't wait to present your users with new tools and features. Since you're focusing your activities on keeping the work environment operational, you need to pay attention to other things than the developers do.

Your job is to keep the servers up and running, and deploying a new software version to a production environment is something that might cause trouble. Using an automated deployment tool will help you define standards, reduce manual work, and at the same time increase the speed at which your company delivers new software.

Integrating a tool like DeployBot will automate the entire deployment process. That way you can set up a full deployment cycle with a safety mechanism to avoid unpleasant surprises. It also improves collaboration between the development and operations teams by deploying software in a reliable and secure manner.

The Project Manager

Whether you're freelancing, managing a small web agency or a team of developers in a large company, time is probably your most precious resource. You really don't want to spend time on managing outages caused by a problematic release. To minimize that risk, you know that you have to take measures before things become difficult. Establishing a fully automated deployment chain in your team is a significant step down that road.

Even if you're not developing software or operating the infrastructure, DeployBot is here to help. Its graphical user interface will help you to quickly get an overview. Even better: let DeployBot notify you when a deployment succeeds or fails. It can send out email notifications that show the deployed revision range, who deployed the changes, and link to the respective environment. That way, you're always kept in the loop about the latest developments.

Automation for Everybody

Even if setting up an automated deployment tool might take some time and require some effort, it’s worth it. Most of the tools out there can connect to various version control systems, to different servers and environments, and offer integrations for other applications and platforms that you and your team probably already use. 

5 Things you should consider before subscribing to a Continuous Deployment Service
2019-11-21T15:43:00+00:00 (updated 2020-08-05T15:09:58+00:00) – Heike Jurzik

Are you looking for a way to boost your productivity as a developer? There are methods and best practices in software development everyone talks about, for example Continuous Integration (CI) and Continuous Delivery or Continuous Deployment (CD), often referred to in tandem as CI/CD. Before you sign up for a CI/CD service, there are a few things you should consider, though. In this article we will have a look at five of them.

1. Does it integrate with existing Workflows?

We are all creatures of habit. Habits help us get through life, and that includes work life. That's why it makes sense to look for a service that plays nicely with the other tools and applications you're already using. Take the version control system, for example – it's crucially important that your CI/CD system integrates seamlessly with your preferred VCS. If you're a Git user, it doesn't matter whether you're hosting the code on GitHub, GitLab, or Bitbucket: it's quite easy to connect DeployBot to those platforms or to any other Git repository out there.

By the way, DeployBot supports more than just network protocols like SSH or Git. Our REST API makes sure that certain events in your connected repositories can trigger actions in DeployBot.

Apart from the VCS, there are other external services you might want to think about. Deploying your code to DigitalOcean, AWS, or the Heroku cloud? DeployBot supports all those providers and offers additional integrations via the FTP/SFTP protocols. Want to deploy your Shopify store themes? DeployBot is your friend. And there is more coming soon: native deployments to Microsoft Azure and the Google Cloud Platform (GCP) are on our to-do list.

It's helpful if the CI/CD service supports the technologies your applications rely on. If your code is built with Node.js, for example, it's a good thing when your deployment tool provides assistance for the JavaScript runtime environment. DeployBot is up-to-date when it comes to application development frameworks. It can compile Java code as well as Scala and Go sources. It supports npm, Node.js, and Composer. On top of that, Gulp and Grunt are supported for automated builds. In other words: If your applications require a certain technology, there is a good chance that DeployBot can handle it.

2. Does it support Containers?

So, we’ve talked about workflows and technologies, which brings us to the next point: containers. A lot of modern microservice applications are simply not possible without container technology. Containers are the environment where your code is built and tested. A CI/CD service that treats containers as "first-class citizens" is therefore a good idea.

So, what would that "treatment" look like? The CI/CD service should be able to generate containers based on the content you pass on to it, because most developers don’t want the productive workload to run on their laptops. Since Docker is the de facto container standard these days, look for a solution that supports Docker and can build Docker containers. It should also be able to upload built images to a container registry, such as Docker Hub.

By default, DeployBot comes with its own Docker containers. It's also possible to connect containers from Docker Hub – that's up to you.

3. Does it save Time?

Would you rather concentrate on coding instead of setting up your work environment? Then you'd better spend your time on your application, not on your CI/CD tool chain. A lot of CI/CD systems out there are open source software, and many of them come at no cost for the software itself. While quickly firing up a virtual machine and setting up your own solution may be tempting, consider the downside of this approach.

It requires a machine with an operating system, the CI/CD and related services, and, of course, somebody who administers the system. In addition, the CI/CD software itself needs looking after. Regular software updates to fix security-related vulnerabilities are vital. So, why not look for a solution that you don't need to manage yourself? Think about all the free time that you can spend working on your application...

Since DeployBot is hosted on our premises, we take care of the infrastructure. Any required maintenance is done by our experts – no hassle for you, promise!

4. Does it strengthen the Team?

I've already mentioned how important it is that a CI/CD service integrates with your favourite tools and supports your work habits. So what about other people? Whenever more than one developer is involved in a project, they have to communicate with the other team members. Many teams use some kind of messaging service to communicate with colleagues and customers these days, for example Slack or Campfire.

It's easy to connect DeployBot to both Slack and Campfire. That way everybody is always in the loop, knows about new deployments, and gets notified when something goes wrong during a deployment. Not interested in the modern ChatOps world? Well, then configure DeployBot to send out email notifications. Other supported external tools are New Relic, Honeybadger, and Bugsnag – they all improve collaboration and teamwork.

5. Does it cost a lot of Money?

Closely related to the time-saving aspect is the last point on our list: money. A number of (commercial) CI/CD services out there offer tons of features that you will likely never need, but cost a small fortune. Those solutions are targeted at legacy environments, i.e. historically grown setups that come with a large variety of tools and solutions. So, if you’re a freelancer or a small web agency, why not find a service that fits your business model and needs instead of paying giant fees for features you don't require?

DeployBot doesn't ask for giant upfront fees – instead you pay for the things you really need.

The right Choice

Of course, large cloud providers feature their own set of tools for a smooth CI/CD workflow. While it's appealing to rely on those tool chains since they come at no extra cost, keep in mind that being locked-in by a vendor is not ideal. Migrating from one provider to another can cause some grey hair, especially in multi-cloud environments.

So, it's best to evaluate what's out there before you sign up with one or more service providers. Only if the CI/CD service supports your preferred workflow (and that hopefully includes containers!) and saves time and money is it worth a try. If you're a team of developers, don't forget about the collaboration features. Happy coding!

Version Control Systems and Continuous Deployment Tools: A perfect Fit
2019-09-26T10:18:00+00:00 (updated 2020-08-05T15:04:31+00:00) – Heike Jurzik

Regardless of the size of your project or team, if you want to develop high quality software, you need to choose the right tools. They help you to automate things and enhance your productivity. Proper release management in combination with a version control system is a central component – whether you're a single freelancer, a distributed team of developers, or a small web agency.

In this article we're going to explain briefly what version control is and how a version control system can improve your workflow. After summarizing the differences between continuous integration, continuous delivery, and continuous deployment we're going to make the link: A version control system is one of the key components of a continuous deployment tool – good thing it's easy to connect DeployBot to your Git repositories, whether they're hosted at GitHub, Bitbucket, GitLab, or on your own server.   

So, let’s start with a short explanation on how version control works.

How Version Control works

It doesn't matter if you're working on web sites, if you're coding small scripts or larger software projects – a version control system (VCS) is your friend. Version control, also known as revision control or source control, records and manages changes to files and folders. More importantly, it allows you to look at the history of changes and go back to a previous version if necessary. When it comes to teamwork, a version control system is an indispensable component. It can be used to resolve editing conflicts between several developers working on the same project.

There are various version control systems out there, e.g. CVS (Concurrent Versions System), SVN (Apache Subversion), Mercurial, and Git. Some of them have been around for quite some time, like CVS, which started in 1990 and is no longer maintained, or Subversion, which was created almost 20 years ago as a CVS alternative. Git was first published in 2005 and uses a completely different approach than CVS and Subversion: it's a so-called distributed version control system, also known as a distributed revision control system, meaning that the complete codebase is always mirrored on every developer's computer.

Let's not look too deep into Git, but mention at least one great feature: branches. The version control system handles different file versions on different branches really well and therefore makes collaboration a lot easier. Separating the master branch from development branches enables team members to develop new features or try out something new without the risk of messing up the stable version(s). Once a branch has been tested and reviewed, it's easy to approve the changes and merge it into the master.
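The branching workflow described above can be sketched with a few Git commands (the repository location, branch name, and user details here are just examples):

```shell
# Create a scratch repository, develop on a feature branch,
# then merge the reviewed branch back into the stable branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
main=$(git symbolic-ref --short HEAD)  # master or main, depending on Git version
git checkout -q -b feature/login       # a separate work space for the new feature
echo "login form" > login.html
git add login.html
git commit -q -m "adds login form"
git checkout -q "$main"                # the stable branch is untouched so far
git merge -q feature/login             # reviewed? then merge it
ls                                     # login.html is now on the stable branch
```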

Who needs a Version Control System?

As the name indicates, a VCS stores different versions of your files and folders, so it keeps track of your changes. Especially in larger projects this is a great help, as you don't need to decide which parts to save or how to name the individual versions. Also, it’s not necessary to document the differences in long README files. Instead, write meaningful commit messages, like “fixes typo in index.html”, “fixes bug #5678”, etc. While it’s possible to type anything into those commit messages, it’s good practice to compose them so that other developers can understand the reasons for the code changes. Even if you’re the only person working on a project, try to be comprehensive – commit messages can be a good reminder when you touch your code months or years later.

A VCS stores past versions, grants access to any of them at any time, and shows the differences between them. It lets you restore older versions of single files or the entire project if necessary. So, you can sleep better knowing that you can't really destroy a customer's website. Well, to be honest, you can still mess up, but it’s easier and quicker to recover and return to a working state. Besides, using a hosting platform like GitHub, Bitbucket, GitLab, etc. for version control ensures that there is a backup of your projects online, including their entire history. If the machine you're coding on breaks down, you can simply clone the repository to another machine and continue with your work.
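For example, restoring the previous version of a single file takes one Git command (file name and commit messages are examples):

```shell
# Commit two versions of a file, then bring back the first one.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
echo "v1" > index.html
git add index.html
git commit -q -m "first version"
echo "v2 with a bad change" > index.html
git commit -q -am "second version"
git checkout -q HEAD~1 -- index.html   # restore the previous version of the file
cat index.html                         # prints: v1
```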

When working in a team, a version control system "isn’t a luxury, it’s a necessity". Forget about shared folders and telling your fellow coders to stay away from file "xyz" because you're working on it. It's not very convenient and will sooner or later result in someone accidentally overwriting someone else's changes. If you use a VCS instead, all team members can work on any file whenever they want and changes can be merged later into a common version.

Continuous Integration, Continuous Delivery, and Continuous Deployment

Before we describe the continuous deployment workflow and the tools involved, let's take a quick look at the terms continuous integration, continuous delivery, and continuous deployment. While a lot of people use them interchangeably, it's important to understand the differences:

  • Continuous Integration (CI): The code is built and tested on a regular basis, i.e. daily, several times per day, or – even better – with every commit.
  • Continuous Delivery (CD): This is the next step, and its goal is to always have code available that can be released at any point. CD uses some automation (building and testing) but requires human intervention in the end when it comes to releasing to a productive environment.
  • Continuous Deployment (CD): All code changes are automatically built, tested, and released. It's similar to continuous delivery but also brings the new version to the production environment without human intervention.

Johanna's article from March 2018 offers a more detailed explanation. It also mentions the pros and cons of all three approaches.

Continuous Integration, Delivery, and Deployment: Workflow

So, you've decided you want to adjust your workflow and you're aiming for continuous integration, delivery, and/or deployment. All three methods rely on a version control system – there is no way around it. Every change to the codebase must be handled by the VCS so that the deployment tool can access it. More precisely, you need to use the VCS and push the current state to a repository after a commit, and the deployment tool takes over from there.

CI/CD workflows are closely linked to other development best practices. We've mentioned Git branches – separating the tasks and giving each developer their own "work space" is a real benefit. They can commit to their branch as often as they like. Once the work is done, it's time to test the code. In DeployBot you can connect up to 50 repositories in the premium version (if you need more, please get in touch). We support GitHub, Bitbucket, Beanstalk, GitLab, and, of course, self-hosted Git repositories via HTTPS or SSH.

After you've connected your branch it's up to you and the other team members how you want to proceed with DeployBot: Quite likely you want to run some tests after you've committed your changes, maybe you want to include one of our predefined Docker containers or create your own. When you're ready, you can create the pull request (GitHub, Bitbucket) or merge request (GitLab) and – as always: Push. Build. Deploy!

Why an automated Deployment Tool is better than FTP for Freelancers
2019-08-07T12:37:00+00:00 (updated 2020-08-05T14:57:22+00:00) – Heike Jurzik

Imagine, you're a freelance web developer. You've just fixed some bugs or added a new feature to a customer's website. It's time to upload your code and the free Wi-Fi in the cozy café seems convenient and fast enough. You start your graphical FTP client, enter the credentials the customer has given to you, the upload begins… and then: "Connection timed out".

The web shop is offline or something else went wrong, and now you're in serious trouble. FTP is an inefficient protocol, it's old-fashioned, and – this is the worst part – it's also vulnerable to attacks. Let's take a look at how FTP works and then at the alternatives.

How does FTP work?

The File Transfer Protocol is used to exchange files between two computers over a network connection. It's a client-server architecture: the client identifies itself to the server, usually with a username and password. Alternatively, anonymous connections are allowed if the FTP server is configured accordingly. Credentials as well as files are transmitted in clear text – there is no encryption!

To make things safer, FTP can be secured with SSL/TLS (FTPS). There is also SFTP (SSH File Transfer Protocol or Secure File Transfer Protocol), an extension of the SSH protocol (part of the SSH program suite) that provides secure file transfer capabilities – not to be confused with FTP run through an SSH tunnel.

Unlike most other internet protocols, FTP uses two channels between client and server. The command or control channel (port 21) and the data channel (port 20) are on two separate TCP connections. The command channel usually handles the commands between the server and the client as well as the replies. The actual data transfer happens on the data channel.

FTP also knows two different modes that define how the data connection is established: active and passive. In active mode, it's the server that initiates the connection with the client (after the FTP client has established a connection on the command channel). If this happens the other way round, i.e. the client initiates the data connection with the server, then this is called passive mode.

Why FTP is annoying

The File Transfer Protocol has been around for a very long time. It all started in 1971 when the first version was published. Over the years FTP has seen a number of revisions and improvements, but it still lacks a lot of features and has plenty of flaws:

  1. FTP is insecure
    This protocol was never designed to be secure. We've mentioned the lack of encryption, and there are a number of known exploits. Not just the files are sent in clear text, but also the usernames, passwords, and the commands you use in an FTP session. Really, it is quite vulnerable.
  2. FTP is slow
    For every single file you transfer, FTP opens a new data channel, performs a TCP handshake, and then starts transmitting – transferring a large number of small files is a pain, and FTP users often see messages like "FTP connection timed out" or "Read timed out".
  3. Deleting files via FTP is a pain
    If you've uploaded the wrong files and later try to delete a folder that contains files and more folders in a nested structure, then you'd better be patient. You can't delete non-empty folders via FTP. First, remove all files, then the empty directory.
  4. FTP is not designed for collaboration
    Are you working in a team? So, which one of you is going to upload the latest code review to your customer's server? Do you all have different accounts, or do you share the username and password? Which brings us back to "1. FTP is insecure"…
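Flaw number 3 mirrors the behaviour of rmdir on a local system – like FTP's RMD command, it refuses to remove a directory that still has contents. A quick local sketch (directory and file names are made up):

```shell
mkdir -p site/assets
touch site/index.html site/assets/app.js

# Fails, just like deleting a non-empty folder over FTP:
rmdir site 2>/dev/null || echo "not empty"

# Delete every file first, then the directories bottom-up:
rm site/index.html site/assets/app.js
rmdir site/assets site
```

An FTP client has to walk the whole tree the same way, issuing one DELE per file and one RMD per (now empty) directory.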

Alternatives to FTP

To solve some of the security issues, you can switch to FTPS (same protocol, same problems, with a little extra security) or to SFTP, which is not really FTP, as mentioned above. It uses a completely different protocol and requires an SSH server instead of an FTP server. If you can find a web host that offers SSH access, that's definitely a lot better than FTP. Because SSH, and therefore SFTP, only uses one channel (TCP port 22), this protocol solves the performance issues at the same time.
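If your host offers SSH access, an SFTP connection needs nothing more than a reachable SSH server on that single port. A hypothetical ~/.ssh/config entry (host name, user, and key path are placeholders) keeps the connection details in one place:

```
Host deploy-target
    HostName example.com
    Port 22
    User deploy
    IdentityFile ~/.ssh/id_rsa
```

With that entry in place, `sftp deploy-target` opens the session over the single encrypted channel.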

Now, let's look at the collaboration problem. Neither FTP nor SFTP is meant for teamwork. Even if you're all located in one office and discuss when and what to upload, you really shouldn't share usernames and passwords. And giving the credentials to just one person causes even more trouble when that team member is off sick or on vacation.

Connecting it all: Deployment Tools

Using a deployment tool can improve your workflow and enhance security at the same time. Most deployment tools connect to a version control system like Git, which stores a history of all changes, tracks code revisions in a repository, merges different file versions, and, if needed, restores previous versions. The combination of a version control system and a deployment tool improves collaboration between team members.
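As a minimal sketch of what that history buys you (file names and commit messages are made up), restoring the previous revision of a single file is one command in Git:

```shell
# Set up a throwaway repository with two revisions of a file
git init -q demo && cd demo
git config user.name "Demo" && git config user.email "demo@example.com"

echo "version 1" > index.html
git add index.html && git commit -qm "first release"

echo "version 2" > index.html
git commit -qam "second release"

# Roll the file back to its previous revision:
git checkout HEAD~1 -- index.html
cat index.html   # prints "version 1"
```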

The deployment tool also connects the repositories of your version control system to one or more servers to which you upload your changes. DeployBot, for example, supports modern and secure protocols with encryption and authentication. Apart from SFTP, you can use our Atomic SFTP feature to add an extra layer of safety. Instead of uploading files directly to the defined directory on the remote server, atomic deployments maintain a special directory structure on your server. This allows you to store more than one release and switch between them if necessary.
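A rough sketch of that idea (the release names are hypothetical): each release lives in its own directory, and a symlink decides which one is live, so switching or rolling back is a single atomic step rather than a long file-by-file upload:

```shell
# Two uploaded releases side by side
mkdir -p releases/release-1 releases/release-2

# Point the live site at the newest release:
ln -sfn releases/release-2 current

# Rolling back is just re-pointing the symlink:
ln -sfn releases/release-1 current
readlink current   # prints "releases/release-1"
```

The web server's document root would point at `current`, so visitors never see a half-uploaded release.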

If it has to be FTP…

Of course, DeployBot not only supports SFTP and Atomic SFTP, it also connects to FTP servers if you don't have SSH access. DeployBot supports both FTP modes (active and passive), connects to one or more FTP servers with your credentials, and uploads files to a specified destination path.

You will still be stuck with the File Transfer Protocol, but you can at least connect a version control system, quickly roll back to previous versions and share the server credentials in a safe environment. So, if you still have customers insisting on FTP, you can make your life easier with a deployment tool and sleep a lot better knowing that you're not going to break a website or a web shop – at least not because of a lousy protocol.

New DeployBot Extension for Google Chrome 2019-07-09T04:00:00+00:00 2020-08-05T14:51:55+00:00 Heike Jurzik

Almost every developer I know is constantly looking for new tools to organize the daily work routine. There are plenty of programs and apps out there, like calendars, time tracking, communication, productivity, or deployment tools. If you're a DeployBot user, you already know how to organize your workflow. You've connected your Git repositories, you've set up one or more environments, maybe you've configured additional build or test routines – it's all right there.

We've now released a new extension for Google Chrome that makes working with your favourite deployment tool even more efficient (when deploying manually, not automatically). Instead of visiting the dashboard, navigating to the right environment, and clicking the Deploy button, you can simply start the deployment from a little drop-down menu hiding behind the DeployBot icon next to the browser's address bar.

Install the Extension

Now, if you haven't set up different environments already, you may want to read about the benefits of having testing/staging and production environments. If you're already familiar with this concept, then let's jump right in. Here is how to install and configure the browser extension:

  1. Open the Chrome Web Store.
  2. Use the search box to find our new DeployBot extension (or follow this direct link) and click Add.
  3. Click the new DeployBot icon next to the address bar and choose Settings.
  4. You can now enter up to 10 environments and set their title.
  5. Click Save to finish the configuration. You should then see the message Data saved successfully.

Tip: To find the correct webhooks, open your DeployBot dashboard, choose a repository and then Settings. Open the tab Webhooks & Badges and scroll down. Copy the webhook URL from one of the listed environments.

screenshot of the DeployBot dashboard, showing the environment settings

Ready, Click, Deploy

Since all my staging environments deploy automatically as soon as the connected GitHub repository sees a new commit, I've only added the production environments to the Chrome extension. When I need to (manually) deploy something, I can just click the little DeployBot icon in Chrome and choose the right environment – done. In case something goes wrong, the extension will let you know, so you can go back to the DeployBot dashboard and analyze the problem.

screenshot of the Chrome extension deploying
DeployBot Chrome Extension: Start your Deployment Tool straight from the Web Browser.

By the way, the new extension works not only with Google Chrome, but also with the open source web browser Chromium. So, give it a try – deploying manually with your favourite deployment tool is now just as fast as automated deployment.

Guest Post: How to Set up and Deploy a Node.js/Express Application for Production 2019-06-29T04:00:00+00:00 2020-08-05T13:56:17+00:00 Johanna Luetgebrune

Node.js is an open source JavaScript runtime environment that makes it easy to build networking and server-side applications. The Node.js platform currently runs on Linux, FreeBSD, Windows and OS X. While applications can run at the command line, this tutorial focuses on executing them as a service, so they restart automatically after a failure or reboot and can safely be used in production environments.

As we progress through this tutorial, we will cover setting up a production-ready Node.js environment on a single Ubuntu 16.04 server. The server runs a Node.js application managed by PM2 and gives users secure access through an Nginx reverse proxy. The Nginx server offers HTTPS via a free certificate from Let's Encrypt.

Install Node.js

To begin, we will install the latest LTS release of Node.js via the NodeSource package archives. To accomplish this, we install the NodeSource PPA to get access to its contents. After making sure that you are in your home directory, use 'curl' to retrieve the installation script for the Node.js 10.x archives.

We’ll be using the 10.x version because that’s the current LTS release. We recommend updating to 10.x before following this tutorial - to update Node, use nvm. You’ll need a C++ compiler, and the build-essential and libssl-dev packages.

$ cd ~
$ curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh

The contents of the script can be inspected with 'nano', or a text editor you prefer –

$ nano nodesource_setup.sh

You can then execute the script using 'sudo' –

$ sudo bash nodesource_setup.sh

After executing the above command, the NodeSource PPA will be added to your configuration and your local package cache updated automatically. Once the setup script from the NodeSource repo has run, install the Node.js package –

$ sudo apt-get install nodejs

As the ‘nodejs’ package already contains the ‘nodejs’ binary and ‘npm’, there is no need to install ‘npm’ separately. That said, for certain ‘npm’ packages to work as intended, you will need to install the ‘build-essential’ package –

$ sudo apt-get install build-essential

Create a Node.js Application

To start, let's write a 'Hello World' application that returns 'Hello World!' to any HTTP request. This will help you verify that Node.js is set up correctly, and you can then replace it with an application of your choice. We'll be using the Express package to keep things simple.

To begin, you need to create a directory and initialize your project.

$ mkdir myapp && cd myapp

$ npm init

You will be asked a few questions; you can accept the default answers by attaching the -f flag. We'll use app.js as the entry point for our application. Next, add Express to the project we created earlier using npm, the package manager for Node.js modules, which installs modules and their dependencies.

$ npm install express --save

Use 'nano' (or another editor) to create and edit app.js inside the project directory:

$ nano app.js

You now need to insert the code below into the file. You also have the option to replace the port, 8080. Ensure that the port you choose is not a privileged port (1024 or lower) and is not currently in use by another application.

const express = require('express')
const app = express()
const port = 8080

app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))

You can now save and exit the application.  

This application listens on the mentioned address, localhost:8080. It returns 'Hello World!' along with a 200 HTTP success code.

Checking the Endpoint

Let's do a quick test run of your application. First, mark 'app.js' as executable:

$ chmod +x ./app.js

And then run it as below:

$  node app.js


Example app listening on port 8080!

It is important to remember that a Node.js application executed using this method will block further commands until the application is terminated. You can do this using Ctrl-C.

To test your application, you need to open a second terminal session on the server and then connect it to localhost using ‘curl’ -

$ curl http://localhost:8080

If the output is as mentioned below, the application is working as intended and is listening on the correct port and address.

Hello World!

In case this is not the output that you see, ensure that the Node.js application is running and set up to listen to the right port and address. Once confirmed, you can kill the application using Ctrl+C.

Install PM2

At the next stage, we will install PM2, a process manager for Node.js applications. It provides a simple way to administer and daemonize applications, running them as a service in the background.

The following command installs PM2:

$ sudo npm install -g pm2

‘-g’ tells ‘npm’ to install the module globally, making it available system-wide.

Manage Application with PM2

PM2 is relatively straightforward and simple to use. The steps below cover a few of the basic uses of PM2. 

To begin, use the 'pm2 start' command to execute the 'app.js' application in the background –

$ pm2 start app.js

This command also adds the application to PM2's process list, which is printed each time an application starts:


[PM2] Spawning PM2 daemon
[PM2] PM2 Successfully daemonized
[PM2] Starting app.js in fork_mode (1 instance)
[PM2] Done.

│ App name │ id │ mode │ pid  │ status │ restart │ uptime │ memory │ watching │

│ app      │ 0  │ fork │ 3524 │ online │ 0       │ 0s     │ 21.566 MB │ disabled │

`pm2 show <id|name>` can be used to get additional details about an app   

PM2 automatically assigns an App name based on the filename, minus the '.js' extension, along with a PM2 id. PM2 also collects and displays additional information, including the process PID, memory usage, and current status. You can check the logs from PM2 by running:

$ pm2 logs

Furthermore, if you need to back up the logs to object storage for easy access, you can do that too – there are npm modules on GitHub that let you create backups of your logs. Easy access to logs helps you keep track of the health of your application.

Applications that run under PM2 are automatically restarted if they crash or are killed. Some additional configuration is needed to make the application launch when the system boots or reboots. Fortunately, PM2 has an easy way to do this: the 'startup' subcommand.

The 'startup' subcommand generates and configures a script to launch PM2 and its managed processes when the server boots.

$ pm2 startup systemd

The final line of the output includes a command that you need to run with superuser privileges as illustrated below – 


[PM2] Init System found: systemd
[PM2] You have to run this command as root. Execute the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u mjm --hp /home/mjm

Execute the generated command, similar to the output highlighted above, but with 'mjm' replaced by your username. This sets up PM2 to start when the system boots. Remember to use the command generated for your own output –

$ sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u mjm --hp /home/mjm

This creates a systemd unit that executes 'pm2' for the user when the system boots. The 'pm2' instance, in turn, executes 'app.js'. The status of the systemd unit can be checked using 'systemctl':

$ systemctl status pm2-mjm

Set Up NGINX as a Reverse Proxy Server

Once your application is running as intended and listening on localhost, you need to establish a way to grant your users access to it. To this end, we will set up the Nginx web server as a reverse proxy.

Open the following file to edit:

$ sudo nano /etc/nginx/sites-available/default

Within the ‘server’ block you should be able to identify an existing ‘location /’ block. Substitute the contents of that block with the configuration below. If the application is configured to listen on a different port, update the port number accordingly.

. . .
    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

This allows the server to respond to requests at its root. If your server is reachable at a domain name, opening that domain in a web browser sends the request to 'app.js' on port 8080 at localhost.

Additional ‘location’ blocks can be added to the same server block to allow access to other applications on the same server. As an example, if another Node.js application is running on port 8081, this location block can be added so that it can be accessed under the ‘/app2’ path –

/etc/nginx/sites-available/default -- Optional
    location /app2 {
        proxy_pass http://localhost:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

After adding location blocks for your applications, you can save and exit.  

To ensure that no syntax errors were introduced, you can use –

$ sudo nginx -t

You can now restart Nginx using -

$ sudo systemctl restart nginx

As long as your Node.js application is running as expected and the Nginx configuration is accurate, you should now be able to access your application through the Nginx reverse proxy. You can check it using your server's URL.

Create Backups of Your Volume

Since you're setting up for production, it's important to back up your volume regularly. The exact backup mechanism will vary with your production environment. If you're running in the public cloud, you can rely on your cloud provider's backup solution. AWS, for instance, lets you create snapshots of your volumes. To automate the whole process, you can write a shell script that creates automated backups of your instance, or use a third-party vendor like N2WS or Veritas, which can manage AWS backups for you. These solutions use flexible policies and schedules to automate backups of EC2 instances, EBS volumes, RDS, DynamoDB, Aurora and Redshift.
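As a sketch of the shell-script route on AWS, a single cron entry invoking the AWS CLI is enough to snapshot an EBS volume nightly (the volume ID and schedule below are placeholders):

```
# m h dom mon dow  command
0 3 * * * aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "nightly backup"
```

The instance running the job needs an IAM role or credentials that allow ec2:CreateSnapshot.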

Azure and Google Cloud are not short of backup options either. Azure lets you back up your VM using the portal or a CLI tool, and GCP has built-in tools for automating and handling backups. Regardless of the platform you're on, remember to create automated backups or set up elastic load balancers to automatically fail over when something goes wrong.

What next?

Once an initial version of the server has been set up, you can create a better workflow by automating the whole process. Write a shell script that authenticates with the Ubuntu server using a password or certificate, then use gulp tasks to organize the production workflow. Here's an example of how your code could look:

var gulp = require('gulp');
var del = require('del');
var push = require('git-push');
var argv = require('minimist')(process.argv.slice(2));

gulp.task('clean', del.bind(null, ['build/*', '!build/.git'], {dot: true}));

gulp.task('build', ['clean'], function() {
  // Build your application (if any asset compilation is required)
});

gulp.task('deploy', function(cb) {
  var remote = argv.production ?
    {name: 'production', url: '<org>/', branch: 'gh-pages'} :
    {name: 'test', url: '<org>/', branch: 'gh-pages'};
  push('./build', remote, cb);
});

Alternatively, you can simplify the task by using a deployment service like DeployBot.