Logic Apps and the Service Bus Connector - The Case of the Incomplete Message!

The Symptom

The other day I tracked down an issue my team was seeing with an Azure App Service Logic App. The Logic App was triggered by an Azure Service Bus Connector configured to work with a Service Bus Topic. When a message was published to the Service Bus Topic, an instance of the Logic App would execute successfully, as expected, on the trigger’s configured schedule. But then, unexpectedly, at the next scheduled time period, a new instance of the Logic App would often execute and reprocess the exact same message from the Service Bus Topic. It was as if the Service Bus Topic message was never successfully “completed” and thus removed from the Topic.

What You Need To Understand

To understand why the same Service Bus Topic message was being processed twice, here is what you need to know:

  • The Service Bus Connector operates in ReceiveMode.PeekLock, so it does not receive and immediately delete the message from the Topic in one atomic call as in ReceiveMode.ReceiveAndDelete. In PeekLock mode, two calls to the Service Bus must be made. The first call is to the Receive method, and the Service Bus puts a lock on the message for a specific lock duration. The default LockDuration for a Queue or Subscription is 1 minute, and currently that cannot be configured in the Service Bus Connector. A second call, to the Complete method, is then required to mark that message as processed and delete it from the Service Bus. (A sketch of this two-call contract follows this list.)

  • After a call to a Connector’s trigger method returns a result indicating that data is available to be processed and that a Logic App instance should be created and executed, the Logic App trigger mechanism immediately calls the Connector’s trigger method again, in case there is even more data to be processed. This makes sense: if more data is available, one typically wants it processed as soon as possible rather than waiting for the next scheduled trigger time period.

  • Connectors can have their own specified triggerState data passed from one call of their trigger method to subsequent calls of their trigger method.

  • The Service Bus Connector passes the message’s unique LockToken as triggerState from the first successful trigger invocation to its next trigger call that runs immediately after it.

  • When the Service Bus Connector trigger method executes, one of the first things it does is check whether triggerState data has been provided. If it has, the Service Bus Connector calls the Complete method on the message represented by that triggerState/LockToken. Only after doing that will it attempt to receive the next message from the configured Topic or Queue.
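
The Connector’s internals are not public, but the two-call PeekLock contract itself is easy to see with any Service Bus client. Below is a minimal sketch using the azure-servicebus Python SDK; the connection string, topic and subscription names are placeholders, and the processing step is hypothetical. The Connector performs the equivalent Receive and Complete calls, but split across two separate trigger invocations.

from azure.servicebus import ServiceBusClient, ServiceBusReceiveMode

CONN_STR = "Endpoint=sb://..."  # placeholder connection string

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_subscription_receiver(
        topic_name="mytopic",                # placeholder
        subscription_name="mysubscription",  # placeholder
        receive_mode=ServiceBusReceiveMode.PEEK_LOCK,
    )
    with receiver:
        # Call 1: Receive. Service Bus locks the message for the
        # LockDuration (1 minute by default) but does not delete it.
        for msg in receiver.receive_messages(max_message_count=1, max_wait_time=5):
            print(str(msg))  # hypothetical processing step
            # Call 2: Complete. This marks the message as processed and
            # deletes it. If this call never happens, the lock expires and
            # the message becomes available to be received again.
            receiver.complete_message(msg)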

The Cause

Now that we understand how the Complete method is called as part of the Service Bus Connector’s trigger mechanism, it is easy to understand the cause of the problem.

If the second call to the trigger fails (the call that would complete the message), for instance because of a transient error on the gateway, then the Service Bus message will not be completed until perhaps the next scheduled trigger.

However, once the default lock duration of 1 minute expires, the lock is lost and Service Bus makes the message available to be received again.

This is exactly what happened: the message was not completed, and when the next scheduled trigger executed, the same message was processed again.

Proof — The Smoking Trigger

From the Trigger History of the Logic App, we can see the trigger that failed in the image below.

Logic App Trigger History with Failed Trigger

The call to this trigger returned a 502 “Bad Gateway” error.

502 - Web server received an invalid response while acting as a gateway or proxy server.

The full JSON of the output message for the failed trigger is:

{
    "headers": {
        "date": "Mon, 07 Sep 2015 23:28:32 GMT",
        "server": "Microsoft-IIS/8.0"
    },
    "body": "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\r\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\r\n<head>\r\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=iso-8859-1\"/>\r\n<title>502 - Web server received an invalid response while acting as a gateway or proxy server.</title>\r\n<style type=\"text/css\">\r\n<!--\r\nbody{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}\r\nfieldset{padding:0 15px 10px 15px;} \r\nh1{font-size:2.4em;margin:0;color:#FFF;}\r\nh2{font-size:1.7em;margin:0;color:#CC0000;} \r\nh3{font-size:1.2em;margin:10px 0 0 0;color:#000000;} \r\n#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:\"trebuchet MS\", Verdana, sans-serif;color:#FFF;\r\nbackground-color:#555555;}\r\n#content{margin:0 0 0 2%;position:relative;}\r\n.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}\r\n-->\r\n</style>\r\n</head>\r\n<body>\r\n<div id=\"header\"><h1>Server Error</h1></div>\r\n<div id=\"content\">\r\n <div class=\"content-container\"><fieldset>\r\n  <h2>502 - Web server received an invalid response while acting as a gateway or proxy server.</h2>\r\n  <h3>There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.</h3>\r\n </fieldset></div>\r\n</div>\r\n</body>\r\n</html>\r\n"
}

The Remedy

This situation should not happen often, but unfortunately it was occurring multiple times per day, and only ever on the second call to the trigger method. The only real remedy is to follow Microsoft’s advice and design your systems to be idempotent, because this situation can and occasionally will happen.
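
What idempotent design can look like in practice: the sketch below simply deduplicates on the message’s unique MessageId before doing any work, so a redelivered message becomes a harmless no-op. The in-memory set stands in for a durable store such as a database table keyed on MessageId, and all names are hypothetical.

# Minimal idempotent-consumer sketch (hypothetical names). In production,
# back 'processed_ids' with a durable store, not process memory.
processed_ids = set()

def do_work(body: bytes) -> None:
    print(f"processing {len(body)} bytes")  # stand-in for real processing

def handle(message_id: str, body: bytes) -> None:
    if message_id in processed_ids:
        return  # duplicate delivery of an uncompleted message: skip safely
    do_work(body)
    processed_ids.add(message_id)

handle("msg-001", b"payload")  # processed
handle("msg-001", b"payload")  # redelivered: no-op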

In addition, to reduce how often this situation occurs, hopefully Microsoft will release a change in the near future so that failed calls to the trigger method are automatically retried.

How To Configure A Logic App To Use An Internal API App

I have been developing Microsoft Azure App Service functionality over the last few weeks and have stumbled over many hurdles. The Logic Apps development experience is still definitely in ‘preview’.

One issue that especially concerned me was a problem I had with a Logic App when trying to use a custom API App that had its Access Level set to Internal. This setting lives in All settings => Application settings => Access Level, with the possible options being:

  • Public (anonymous);
  • Internal; or
  • Public (authenticated).

After configuring the Logic App to use the custom API App and running the Logic App, the following error message is shown in the Outputs Link of the Logic App run => Action.

1"status": 403,
2"source": "https://[omitted].azurewebsites.net/api/service/invoke/myinternalaccess.apiapp/DoSomething?api-version=2015-01-14",
3"message": "Permissions for service \"MyInternalAccess.ApiApp\" are set to internal but this request was external."

Clearly, the Logic App was behaving like an external caller of the API App.

The way to fix this is by modifying the code of the Logic App. There were two issues in the code.

Issue 1

In the "parameters" node, there is a parameter for a token for your API App.

 1"parameters": {
 2  "/subscriptions/[...]/resourcegroups/[...]/providers/Microsoft.AppService/apiapps/myinternalaccess.apiapp/token": {
 3    "type": "String",
 4    "metadata": {
 5      "token": {
 6        "name": "/subscriptions/[...]/resourcegroups/[...]/providers/Microsoft.AppService/apiapps/myinternalaccess.apiapp/token"
 7      }
 8    }
 9  },
10...

The code above is missing a "defaultValue" node. To fix this issue, add "defaultValue": "", after the "type" line. The parameter should now look like:

 1"parameters": {
 2  "/subscriptions/[...]/resourcegroups/[...]/providers/Microsoft.AppService/apiapps/myinternalaccess.apiapp/token": {
 3    "type": "String",
 4    "defaultValue": "", 
 5    "metadata": {
 6      "token": {
 7        "name": "/subscriptions/[...]/resourcegroups/[...]/providers/Microsoft.AppService/apiapps/myinternalaccess.apiapp/token"
 8      }
 9    }
10  },
11...

After saving, exiting the code editor, refreshing, jumping up and down for good luck (given that the user experience is not entirely reliable), going back into the editor of the Logic App, waiting a little for things to load (this can be important!), and switching to the code editor, the defaultValue is then magically populated with a nice long set of characters.

Issue 2

The second issue is that the API App action must have an "authentication" node under the "inputs" node.

The API App action that did not work looked like the following:

 1"actions": {
 2  "myinternalaccess.apiapp": {
 3    "type": "ApiApp",
 4    "inputs": {
 5      "apiVersion": "2015-01-14",
 6      "host": {
 7        "id": "/subscriptions/[...]/resourcegroups/[...]/providers/Microsoft.AppService/apiapps/myinternalaccess.apiapp",
 8        "gateway": "https://[omitted].azurewebsites.net"
 9      },
10      "operation": "DoSomething",
11        "parameters": {}
12    },
13    "conditions": []
14  }

The fix is to add the following "authentication" node after the "parameters" node.

1"authentication": {
2  "type": "Raw",
3  "scheme": "Zumo",
4  "parameter": "@parameters('/subscriptions/[...]/resourcegroups/[...]/providers/Microsoft.AppService/apiapps/myinternalaccess.apiapp/token')"
5}

The action should now look like:

 1"actions": {
 2  "myinternalaccess.apiapp": {
 3    "type": "ApiApp",
 4    "inputs": {
 5      "apiVersion": "2015-01-14",
 6      "host": {
 7        "id": "/subscriptions/[...]/resourcegroups/[...]/providers/Microsoft.AppService/apiapps/myinternalaccess.apiapp",
 8        "gateway": "https://[omitted].azurewebsites.net"
 9      },
10      "operation": "DoSomething",
11      "parameters": {},
12      "authentication": {
13        "type": "Raw",
14        "scheme": "Zumo",
15        "parameter": "@parameters('/subscriptions/[...]/resourcegroups/[...]/providers/Microsoft.AppService/apiapps/myinternalaccess.apiapp/token')"
16      }
17    },
18    "conditions": []
19  }

After saving these changes and doing a good-luck dance, the Logic App will now successfully call the API App.

Sometimes when you go back into the editor, the Discard button is enabled after it has loaded, and if you switch into the code view you may notice that the authentication node is missing. Simply press the Discard button and the authentication node immediately reappears.

Hopefully Microsoft will fix many of the user experience issues with Logic Apps very soon!

BizTalk Summit 2015 in London - Part 2 - Turning of the Tide?

In the previous article, BizTalk Summit 2015 in London - Part 1 - Microsoft’s Roadmap for Integration, we recapped how App Service is key in Microsoft’s roadmap and strategy for Integration and that Microsoft announced App Service will be available on-premises as part of the Microsoft Azure Pack. Microsoft also confirmed that the BizTalk brand is important and is here to stay.

Microsoft’s vision is to democratise integration (simplify it so that developers do not require specialist expert skills), be the iPaaS leader for the enterprise, and build a rich ecosystem for the community and business partners.

This article shines a light on various other jigsaw-puzzle pieces that were presented at the BizTalk Summit 2015, to further help you understand the wider picture of where “Modern Integration” appears to be heading.

Enterprise Focus - Connecting with IBM Systems

In line with Microsoft’s vision for the enterprise and integrating with existing line-of-business systems, Paul Larsen, Principal Program Manager at Microsoft, presented the IBM MQ, DB2 and Informix Connectors. These connectors have been developed to allow Azure solutions to communicate with on-premises IBM systems.

An integration solution connecting to an IBM system could be deployed as a hybrid solution when using Azure in the cloud, or as a completely on-premises solution using App Service on-premises with the Microsoft Azure Pack.

The MQ Connector currently only supports MQ version 8. Notably, MQ version 8 is not supported by BizTalk Server 2013 R2, and there was no information on whether it would be supported in BizTalk Server 2016.

The DB2 Connector communicates with DB2 using the Distributed Relational Database Architecture (DRDA) database interoperability standard that DB2 supports. This connector also has support for custom SQL and the Open Data Protocol (OData). To achieve all of this, Microsoft have developed a new ADO.NET provider for DRDA - which is also utilised by the Informix Connector.

Democratisation Focus - The Durable Task Framework

Dan Rosanova, Senior Program Manager at Microsoft, presented a recently open-sourced library for durable, scalable, reliable, traceable and manageable .NET code-based workflows with eventual consistency.

The Durable Task Framework allows developers to write long-running persistent workflows in code using the async/await capabilities.

Interestingly, this framework was originally released as a preview in June 2013, apparently without much fanfare or wide reach in the community, since many people thought it was brand new.

The framework provides automatic persistence and check-pointing of program state, versioning of orchestrations and activities, error handling, compensation, automatic retries, asynchronous timers, and diagnostics.

Microsoft internally uses it to reliably orchestrate long-running provisioning, monitoring and management operations. The orchestrations can scale out horizontally by adding more worker machines.

The concepts of logic, state and the runtime are separated. State management currently happens in Azure Service Bus, optionally with an Azure Storage account. Your custom code is the orchestration logic.

Please refer to the Durable Task Framework Wiki for usage documentation.

One example use case for this framework is a situation where distributed transactions or two-phase commit protocols would typically be required, but the services being consumed do not support locking or transactions.

In that scenario, it is common for a developer to create a BizTalk orchestration to robustly orchestrate the calling of those multiple services. Now, however, there is another viable, reliable, durable and robust implementation option. Depending on the full scope of requirements, this scenario can be implemented in .NET with the Durable Task Framework and Service Bus - instead of within a BizTalk orchestration.
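
Purely to illustrate the shape of this style of code, here is a deliberately simplified call-then-compensate orchestration in plain Python asyncio. This is not the Durable Task Framework’s actual API (which adds durable checkpointing and replay on top of Service Bus), and all of the service names are hypothetical.

import asyncio

# Hypothetical activities. In the Durable Task Framework these would be
# checkpointed so a crashed orchestration can resume where it left off.
async def reserve_stock(order_id: str) -> None:
    print(f"stock reserved for {order_id}")

async def charge_payment(order_id: str) -> None:
    raise RuntimeError("payment service unavailable")  # simulated failure

async def release_stock(order_id: str) -> None:
    print(f"stock released for {order_id}")  # compensating action

async def place_order(order_id: str) -> None:
    await reserve_stock(order_id)
    try:
        await charge_payment(order_id)
    except Exception:
        # No distributed transaction is available, so explicitly undo
        # the earlier step before surfacing the failure.
        await release_stock(order_id)
        raise

if __name__ == "__main__":
    try:
        asyncio.run(place_order("order-42"))
    except RuntimeError as err:
        print(f"order failed and was compensated: {err}")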

Imagine, for example, that part of your custom solution for a customer starts an orchestration task from within a Web App or an API App.

To summarise - robust, durable, traceable and manageable orchestration logic in pure .NET without the need for specialist, expert developer skills.

Democratisation Focus - Logic Apps

Stephen Siciliano, Senior Program Manager at Microsoft, presented an in-depth look at Logic Apps.

Stephen first demonstrated how to develop an application that archives Twitter tweets to a DropBox account.

The barrier of entry for developing Logic Apps is low, which means that specialised expert developer skills (such as those required for BizTalk development) are not required.

Some notable features of Logic Apps are that they are resilient against failure with an “at least once” delivery guarantee, and they can directly call any REST endpoint without the need for an API App. There is also the ability to use parameters to separate configuration from the definitions (a good story for deployments to different environments), support for large messages (less than 100 MB), and support for binary blobs by externalising the state to storage or Base64-encoding it into the JSON message.

Enterprise Focus - Microsoft Azure BizTalk Services (MABS) 1.0 Features in App Service

Back to the enterprise vision, Stephen Siciliano also demonstrated an Enterprise Application Integration scenario using the BizTalk XML Validator, BizTalk Transform and BizTalk XPath Extractor API Apps in a Logic App.

Prashant Kumar, Senior Program Manager at Microsoft, discussed how the functionality of Microsoft Azure BizTalk Services (MABS) 1.0 is available in App Service, including the XML Validator, the Transform service and the Flat File Encoder.

Enterprise Focus - BizTalk B2B in App Service

Prashant Kumar also discussed and demonstrated some of the BizTalk B2B API Apps - TPM (for managing trading partners, agreements and artefacts), AS2, EDIFACT and X12.

Of note, there was discussion about how the features within the BizTalk platform are bit-by-bit being “broken out” into individual apps. One could almost describe that as a re-architecture.

Many more BizTalk features are also now becoming available in App Service, including OAuth, JSON support, monetisation in the marketplace, pre-baked integration Logic App recipes, long-running workflows and the BizTalk Rules API App.

Enterprise Focus - BizTalk Rules API App

The new BizTalk Rules API App can be used to help decouple business logic from application code. It retains many of the familiar concepts from the BizTalk Rules Engine (BRE) - such as Vocabulary, Policies and of course the Rules.

It was interesting and concerning to hear Microsoft talking in terms of enabling business users to make changes to business rules - for instance, on the fly, in production. It would be helpful if Microsoft addressed more real world concerns about a change workflow or lifecycle and the testing of business rule changes before they are deployed into production.

Enterprise Focus - API Management

Sameer Chabungbam, Principal Program Manager at Microsoft, presented how API Apps are a powerful platform for building and managing APIs - something that is increasingly important in the enterprise.

“Connector” API Apps let you build functionality to communicate with other systems - similar to the role of an adapter to another system.

Sameer demonstrated how to configure an API App with Swagger and optimise the API App for usage within a Logic App.

Migration of BizTalk Artefacts to App Service

Jon Fancey, Integration MVP, and Dan Probert introduced a new company, The Migration Factory, which in the near future, for a fee, aims to automate the migration of BizTalk artefacts to App Service for cloud deployment or on-premises hosting with the Azure Pack.

Due to technical complexities with some orchestration implementations, not all solutions may be 100% automated, but a very high automated conversion rate is possible.

This is rather a telling tale about the direction of Modern Integration implementations.

BizTalk Server Is Here to Stay

The general consensus, though, is that BizTalk Server still has a niche in the market and will be around for many more years than some people can imagine. There are many on-premises systems that, for various business reasons, will never be deployed to the cloud.

BizTalk Server’s features are rich and powerful. As Michael Stephenson, Integration MVP, coined it, BizTalk Server is the “Integration Swiss Army Knife” - it can do it all.

Steef-Jan Wiggers, Integration MVP, presented some strong cases for BizTalk Server playing a role in data enrichment and distribution, from deep integration with line-of-business systems such as SAP to consumer applications. Mission-critical applications are also likely to remain on-premises and could be too difficult or not cost-effective to redesign.

Further emphasising BizTalk Server’s relevance and interoperability in the Modern Integration world, Steef-Jan also demonstrated BizTalk Server 2013 R2’s capability to communicate with REST endpoints and to convert between SOAP and REST payloads with JSON encoding and decoding.

Kent Weare, Integration MVP, also demonstrated how Azure API Management can be used to create a REST API endpoint that acts as a facade to an existing SOAP endpoint on an on-premises BizTalk Server.

Business Activity Monitoring (BAM) in Power BI

Todd Glad Nordahl, Technology Solution Professional at Microsoft, presented the Power BI Designer and Dashboard, and discussed how it can be used on top of BizTalk Server to connect to BizTalk’s Business Activity Monitoring (BAM) data and augment that with other data sources to help make business decisions.

Windows Communication Foundation (WCF)

There was little discussion about the future of WCF, which appears at odds with Modern Integration and the Web and API Apps. WCF seems to be fading into the background and becoming a Legacy Integration technology.

It would be interesting to see an official statement from Microsoft about the future of WCF.

Reflections

Reading between all these lines, the BizTalk platform is being re-architected for the Modern Integration world, into App Service, which can be used in the cloud or on-premises.

The deployment, versioning and dependency issues that plague many BizTalk Server solutions due to its dependency on the Global Assembly Cache (GAC) will finally be resolved by moving to the App Service architecture.

There are, however, still many unanswered questions and many unknowns.

  1. Within the world of Microservices and new container technologies, there will need to be a brand new set of guidelines and better practices to understand and apply.
  2. Of course, there may be a bunch of new challenges waiting to be discovered and solved, and avoiding deployment issues in a Microservice world is one of them.
  3. Logic Apps have tracking and archiving of content (does anyone have concerns about privacy with this?), and Azure API Management provides analytics. I am not sure how this will translate into equivalent and improved functionality of BAM.

All the pieces on the chess board are being lined up, and Microsoft now appears to have a solid architecture for Modern Integration. We now may be witnessing the start of the turning of the tide.

Architects and enterprises will now have the choice of using App Service functionality (in the cloud or on-premises) in addition to the option of implementing a solution using the equivalent functionality in BizTalk Server.

BizTalk Summit 2015 in London - Part 1 - Microsoft's Roadmap for Integration

I think everyone agrees that the BizTalk Summit 2015 in London was a smashing success. Saravana Kumar’s BizTalk360 team have once again done a fantastic job of uniting the world of Integration. This year saw 330 people from 20 countries, with all the speakers being either Microsoft employees or Integration MVPs.

The first day was primarily filled with presentations from Microsoft on their new App Service functionality, and on the second day the Integration MVPs were allowed to shine.

This first article concentrates on the roadmap and strategy that Microsoft has for Integration.

Important On-Premises Announcements

Before we dive into the roadmap, there were two important announcements for the community in the area of on-premises integration.

Firstly, there will be a new major version of BizTalk Server released in 2016 to align with and provide support for the new releases of Windows platform products - such as the next version of Windows Server, SQL Server and Visual Studio.

Secondly, the App Service functionality will be available on-premises with the next release of the Windows Azure Pack. The release date, however, was not specified, but from a few discussions I have the impression that it will be months away, not weeks.

Microsoft’s Roadmap and Strategy for Integration

Josh Twist, Principal Program Manager on the Microsoft Azure team, was originally the keynote speaker but unfortunately was unable to attend. The audience nevertheless warmly received the keynote presentation by Karandeep Anand, Partner Director of Program Management at Microsoft.

Karandeep’s keynote articulately explained Microsoft’s roadmap and strategy for integration. He spoke of Microsoft’s journey to the cloud so far and the many learnings they have gained.

Learnings from Azure Websites

Azure Websites (now Web Apps) is by far the largest service currently used in Azure. The explanation, according to Karandeep, is the simple, low-complexity barrier of entry, combined with rich features and tooling, automatic load balancing, scaling and geo-redundancy.

The identified gaps with Websites, however, are a lack of integration with business logic, rules, triggers and workflow.

Learnings from BizTalk Services

There were many learnings from Microsoft’s BizTalk Services offering as well. Importantly for many customers, BizTalk as a brand name has been recognised officially by Microsoft as being important. Karandeep confirmed that the BizTalk name is here to stay. The BizTalk Services offering also validated various cloud design patterns, and hybrid connectivity was identified as critical and one of their differentiators in the market.

The identified gaps with BizTalk Services include the need for more out-of-the-box sources and destinations, pipeline templates, custom code support, long-running workflows and parallel execution.

All in all, Microsoft identified the need to significantly invest in this space to approach the same value and functionality as BizTalk Server.

Learnings from BizTalk Server

The learnings from BizTalk Server were also quite interesting. It was no surprise, however, when Karandeep mentioned that there is a high-complexity barrier of entry into the world of BizTalk. This unfortunately encourages a proliferation of the “hack zone”, where developers hack together applications in an effort to develop a quick solution, but these applications or scripts don’t necessarily have the robustness, scalability or desired level of support and maintainability.

Microsoft’s Vision

Microsoft’s subsequent vision that addresses all of these learnings is three-fold.

  1. Firstly, Microsoft wants to fill the gap between the high-complexity barrier of entry of BizTalk Server and the low-complexity barrier of entry of Web Apps (Azure Websites). The idea is to democratise integration - by making it simple, easy, and approachable by the masses of developers, not just specialised experts.
  2. Next, Microsoft importantly will complement this ease of use with a heavy focus on the enterprise, aiming to be the Integration Platform-as-a-Service (iPaaS) leader, providing 24/7, robust, resilient services with all the solid functionality of BizTalk Server.
  3. Lastly, Microsoft realise that their offering should provide an extensible foundation that the community and partners can enrich, by creating a public marketplace that supports plugins and monetisation.

Microsoft’s Roadmap

Microsoft has a holistic roadmap that aims to:

  • Empower business employees with insights (analytics and statistics) to help make solid decisions;
  • Enable the transformation of businesses with the agility to develop solutions faster; and
  • Enable businesses to engage their customers by connecting with any device at Internet scale. Hence, the App Service cometh.

The App Service

App Service is the resulting integrated offering from Microsoft that enables the development of rich, engaging and intelligent applications that scale as your business grows.

For more information about App Service, please see http://azure.microsoft.com/en-us/services/app-service/.


Continued in Part 2 » BizTalk Summit 2015 in London - Part 2 - Turning of the Tide?

A Comparison of Website Hosting Solutions

When I recently started building ChannelAdam and this Software Development Zone, I considered many different types of website technologies including various blog engines, content management systems and programming languages.

However, I couldn’t decide which technology I should use until I had a better idea of how I would host the sites. As a polyglot programmer, the language and technology were the least of my concerns. I had 5 overarching principles that would guide the hosting and development of this site.

My Principles / Technology Selection Criteria

The principles I wanted to follow, and thus the selection criteria for hosting were:

  1. Low cost. Since this is a hobby, I want to keep the cost of hosting this operation very low or even better, free.
  2. Branding. I must have a custom domain.
  3. Trustworthiness and security. I must have SSL. I consider it to be my duty to you, in good conscience. When you visit my website, I want you to be assured that the content you see was not tampered with between my server and your browser (i.e. no Man-in-the-Middle attacks - especially with so many public, unsecured Wi-Fi access points these days). For this reason, I agree with the SSL/HTTPS Everywhere movement, and I think that all websites should be HTTPS, even if they don’t specifically have any sensitive content. Let’s keep the web safer!
  4. Search Engine Optimisation. I must have SEO and page redirection capability for changing the structure of the site in the future.
  5. Stability, scalability and availability. The host must be stable and have high availability.

Tell Me the Results Already!

After researching the various hosts and platform providers, I was a little disappointed. Hosting a website with a custom domain and SSL is not commonly done cheaply.

Having said that, RedHat OpenShift won outright - they are the ONLY host offering free hosting with a custom domain and SSL, scaling up to 3 ‘gears’ for free. To get the SSL, however, you apparently need to register and upgrade to the zero-dollar Bronze plan and enter your credit card details. The only problem is that they do not yet accept billing addresses for people in Australia - so sadly I cannot yet use OpenShift.

The next best host I found, after some positive word of mouth, was Digital Ocean. $5 a month for a Linux box under my complete control, with a custom domain and SSL.

At the moment, Digital Ocean has my business…

The Long Version of the Results

The Plan to Reduce Costs

I quickly determined that the easiest way to reduce costs is to have a static website only. This reduces cost because the size of the server can remain small and still perform under high load.

In addition, it is possible to use Content Delivery Networks (CDNs) to serve your static files. The problem with CDNs, however, is that the free ones I found do not support SSL.

After researching the current state of Static Site Generators (SSGs), I was sufficiently convinced that I could develop the majority of ChannelAdam as a static website with the help of an SSG.

The cost of SSL certificates was mitigated with StartSSL - where anyone can get a free SSL certificate (for their non-commercial website).

The Comparison

Below are some of the hosts that I researched. Not every cell in the table is filled out, either because the host does not offer the service or I quickly lost interest in them due to other criteria not being met…

For each host, I compared the free tier, the paid expansion options, the supported technologies, custom domain support, SSL support and redirect support.

Heroku

  • Free tier: 1x Dyno (512 MB RAM, 1x CPU share); Dev/Hobby Postgres database with up to 10k rows (and up to 4 hours downtime per month); Standard support with 1+ day response times.
  • Expansion: 2x Dynos (512 MB RAM, 1x CPU share) at $0.05/dyno-hour ($34.50/month); Basic 10M-row database $9/month; Premium Support (24/7, 1-hour response SLA) $1,000/month.
  • Technologies: Node.js, Java, Ruby, Python. See Heroku Add-ons.
  • Custom domain: free. See Custom Domains.
  • SSL: $20/month for an SSL Endpoint; available in US and Europe regions. See: SSL Endpoint, SSL.

dotCloud

  • Free tier: none.
  • Expansion: static site, 1 instance, 32 MB RAM, $0.06/hour ($4.32/month); Postgres $8.64/month.
  • Technologies: PHP, Node.js, Python, Ruby, Perl, …
  • Custom domain: yes, unlimited.
  • SSL: SSL Load Balancer $21.60/month. See: SSL.

AppFog

  • Free tier: none.
  • Expansion: Basic $20/month; unlimited apps within 2 GB RAM; up to 8 service instances; 200 MB storage per MySQL or Postgres instance; 10 GB total data transfer; 40 requests per second per app instance.
  • Technologies: Java, Node.js, Ruby, Python, PHP.
  • Custom domain: unlimited.
  • SSL: 2 dedicated SSL endpoints (on AWS) on the Developer $50/month plan or better.

Amazon S3

  • Free tier: Free Usage Tier for the first year only; 5 GB of Amazon S3 standard storage, 20,000 GET requests, 2,000 PUT requests and 15 GB of data transfer out each month.
  • Expansion: first 1 TB/month at $0.0330/GB, plus data transfer fees.
  • Technologies: static files. See: Website Hosting; Gotcha using virtual host; How to serve S3 over https with CloudFront.
  • Custom domain: yes - Amazon S3 with Amazon Route 53 DNS. See: Custom domain walkthrough; Custom SSL domain names.
  • SSL: yes - with Amazon CloudFront (Custom SSL). SNI Custom SSL has no upfront or monthly fees for certificate management; you simply pay normal CloudFront rates for data transfer and HTTPS requests, e.g. $0.0125 per 10K HTTPS requests plus data transfer of $0.190/GB for the first 10 TB per month. Dedicated-IP SSL costs $600 for each custom SSL certificate you associate with your CloudFront distributions, pro-rated by the hour. See: How to serve S3 over https with CloudFront.

Gondor

  • Free tier: none.
  • Expansion: shared hosting at $10/month; a slot is 1 Python WSGI process, a 2.5 GB PostgreSQL database, a 64 MB Redis cache or 1 Celery process.
  • Technologies: Python.
  • Custom domain: yes.
  • SSL: yes - Dedicated-IP SSL.

OpenShift

  • Free tier: 1-3 gears, each with 512 MB RAM and 1 GB storage; custom domain; shared SSL certificate for rhcloud.com; SNI SSL for a custom domain. As an example, this is suitable for a Drupal-based app with 15 pages/second, hundreds of articles and ~50k visitors per month.
  • Expansion: small gears $0.02/hour; medium gears $0.05/hour (recommended for Java).
  • Technologies: PHP, Perl, Python, Ruby, Node.js, Java.
  • Custom domain: yes, for $0/month.
  • SSL: yes, for $0/month.
  • Redirects: yes, depending on the technology used.

Google App Engine

  • Free tier: 28 instance hours; 5 GB cloud storage; 1 GB/app per day of incoming data; 1 GB/app per day of outgoing data (see: Quotas). Static resources are placed in the cache and are therefore not part of instance hours.
  • Expansion: $0.05/instance/hour; cloud storage $0.026/GB/month; incoming data free; outgoing data $0.12/GB. See: App Engine.
  • Technologies: Python, Java, PHP, Go. See: Static files in Python; Host on Google Server; Implementing a static website in Google App Engine; Git push to deploy.
  • Custom domain: yes. See: Domain.
  • SSL: SNI SSL certificate slots are offered at no additional charge for accounts that have billing activated (see: Pricing, SSL); Virtual IP $39/month.
  • Redirects: NOT for static content.

Google Compute Engine

  • Expansion: 1 core, 3.75 GB RAM, $0.07/hour.
  • Technologies: Node.js.

Microsoft Azure Websites

  • Free tier: up to 10 free websites, 1 GB storage, an *.azurewebsites.net subdomain and SSL.
  • Expansion: Shared preview price $0.013/hour (~$10/month); Basic plan, small instance (1 CPU core, 1.75 GB RAM) $0.075/hour (~$56/month); Standard plan, small instance $0.10/hour (~$75/month).
  • Technologies: .NET, PHP, Node.js, Python, Java.
  • Custom domain: not on the Free plan; yes on the Shared plan and above.
  • SSL: not on the Free or Shared plans. On the Basic plan: SNI SSL $9/month, IP SSL $39/month. On the Standard plan: 5 SNI SSL and 1 IP SSL certificates included.

Microsoft Azure CDN (storage block blob with a CDN endpoint and custom domain)

  • Expansion: locally redundant storage $0.069/GB for the first TB (geo-redundant and read-only geo-redundant also available); $0.005 per 100,000 transactions across all storage types. See: Pricing.
  • Technologies: see: How to use CDN; How to enable CDN; How to map CDN content to a custom domain; Configure custom domain name in Azure Storage.
  • Custom domain: yes.
  • SSL: yes, but NOT on a custom domain. See: SSL cert for custom domain endpoint on CDN.

GitHub Pages

  • Free tier: 1 GB per repository, 100 MB per file.
  • Technologies: HTML, Markdown, Jekyll. See: Using Jekyll.
  • Custom domain: yes. See: My custom domain isn’t working.
  • SSL: no.

BitBucket

  • Free tier: static website hosting.

Digital Ocean

  • Free tier: none.
  • Expansion: 512 MB RAM, 1-core processor, 20 GB SSD disk, 1 TB transfer, $0.015/hour ($5/month).

Websites hosted on Google Drive

  • (No further details captured.)

Modulus.io

  • Expansion: $0.02/hour, 296 MB RAM + 512 MB swap.

Conclusion

It is possible to run a website cheaply, with a custom domain and a free SSL certificate - you just have to do the research in order to find the best deal.

For me, RedHat OpenShift and Digital Ocean are leaders in their field.