<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Simon Frey</title>
    <link>https://simonfrey.writeas.com/</link>
    <description>Writing about tech, podcasting and other thoughts that come to my mind.</description>
    <pubDate>Sun, 05 Apr 2026 13:59:10 +0000</pubDate>
    <item>
      <title>golang gopher benchmark wednesday</title>
      <link>https://simonfrey.writeas.com/known-length-slice-initialization-speed-golang-benchmark-wednesday?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[golang gopher benchmark wednesday&#xA;&#xA;Known length slice initialization speed - Golang Benchmark Wednesday&#xA;&#xA;I stumbled over the hint, that it is better for performance if you initialize your slices with a dedicated length and capacity if you know it. Sounds as it would make sense, but I wouldn&#39;t be me if I just accept that without testing that hypothesis.&#xA;&#xA;An example that I am using in real life is for creating a slice of ids for querying a database later on with that ids. Iterating over the original data structure (in my case a &#39;map[string]SimonsStruct{Id int, MORE FIELDS}&#39;) and copying the ids out.&#xA;&#xA;Normally I used &#39;make([]int,0)&#39; (len == 0 &amp; cap == 0), so let&#39;s see if that would be faster with initializing the slice directly with it the right capacity and length.&#xA;&#xA;Keep in mind the tests only work if you know the size of the target slice upfront. If not, sadly this Benchmark Tuesday will not help you.&#xA;&#xA;Benchmark Code&#xA;&#xA;Bad: Initialize slice empty even if you know the target size&#xA;const size int //See size values benchmarked later in table&#xA;func BenchmarkEmptyInit(b testing.B) {&#xA;&#x9;for n := 0; n &lt; b.N; n++ {&#xA;&#x9;&#x9;data := make([]int,0)&#xA;&#x9;&#x9;for k:=0;k&lt;size;k++{&#xA;&#x9;&#x9;&#x9;data = append(data,k)&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;&#xA;Best for big size: Initialize slice with known capacity and add data with append&#xA;const size int //See size values benchmarked later in table&#xA;func BenchmarkKnownAppend(b testing.B) {&#xA;&#x9;for n := 0; n &lt; b.N; n++ {&#xA;&#x9;&#x9;data := make([]int,0,size)&#xA;&#x9;&#x9;for k:=0;k&lt;size;k++{&#xA;&#x9;&#x9;&#x9;data = append(data,k)&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;&#xA;Best for small &amp; medium size: Initialize slice with known capacity &amp; length and add data with direct access&#xA;const size int //See size values benchmarked later in table&#xA;func 
BenchmarkKnownDirectAccess(b testing.B) {&#xA;&#x9;for n := 0; n &lt; b.N; n++ {&#xA;&#x9;&#x9;data := make([]int,size,size)&#xA;&#x9;&#x9;for k:=0;k&lt;size;k++{&#xA;&#x9;&#x9;&#x9;data[k] = k&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;&#xA;Results&#xA;&#xA;The table shows the time it took for every example to init all its elements (only measured inside the benchmark loop)*&#xA;Have an eye on the unit! (ns/ms/s)&#xA;&#xA;| #Elements (size) | EmptyInit | KnownAppend | KnownDirectAccess |&#xA;|------------------|-----------|-------------|-------------------|&#xA;| 1                | 31.00 ns  | 1.52 ns     | 0.72 ns           |&#xA;| 100              | 852 ns    | 81.4 ns     | 59.1 ns           |&#xA;| 100 000          | 1.11 ms   | 0.22 ms     | 0.20 ms           |&#xA;| 1 000 000        | 10.76 ms  | 3.13 ms     | 3.14 ms           |&#xA;| 100 000 000      | 2.48 s    | 0.21 s      | 0.22 s            |&#xA;| 300 000 000      | 6.79 s    | 0.90 s      | 0.95 s            |&#xA;&#xA;Interpretation&#xA;&#xA;That initializing the slice with len &amp; capacity 0 would be the worst was obvious, but I am still surprised that the append approach outperforms the direct access for bigger sizes.&#xA;&#xA;But after tinkering about it total makes sense. The direct access approach needs to write every entry twice:&#xA;&#xA;1) Initializing the whole array with its &#39;nil&#39; value (in our case int with &#39;0&#39;)&#xA;2) Writing the actual value into that slice&#xA;&#xA;Step 1) is not needed with the append approach, as we just reserve a memory location but the previous values stay there until we write them in step 2. For bigger slices this setup overhead outweighs the performance benefit of direct access.&#xA;This will be even more significant if the values in the slice are not only simple int but even bigger (e.g. 
struct with a lot of fields), as then the setup will have to initialize even more &#39;nil&#39; values.&#xA;&#xA;Conclusion&#xA;&#xA;The hint I found only was right: If you know the size of your target slice always initialize it with that size as capacity. For medium &amp; small size slices use the direct access approach. For very big slices use append.&#xA;&#xA;Thanks for reading and see you next week!&#xA;&#xA;You got any feedback? Would love to answer it on HackerNews&#xA;&#xA;p.S. There is a RSS Feed&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://simon-frey.eu/static/benchmark_tuesday.png" alt="golang gopher benchmark wednesday"/></p>

<h1 id="known-length-slice-initialization-speed-golang-benchmark-wednesday">Known length slice initialization speed – Golang Benchmark Wednesday</h1>

<p>I stumbled over the hint that it is better for performance to initialize your slices with a dedicated length and capacity if you know them. Sounds like it would make sense, but I wouldn&#39;t be me if I just accepted that without testing the hypothesis.</p>

<p>An example that I use in real life is creating a slice of ids for querying a database later on with those ids: iterating over the original data structure (in my case a &#39;map[string]SimonsStruct{Id int, MORE FIELDS}&#39;) and copying the ids out.</p>

<p>Normally I used &#39;make([]int, 0)&#39; (len == 0 &amp; cap == 0), so let&#39;s see whether initializing the slice directly with the right capacity and length is faster.</p>

<p><strong>Keep in mind the tests only apply if you know the size of the target slice upfront. If not, sadly this Benchmark Wednesday will not help you.</strong></p>
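<p>As a concrete sketch of the id-extraction use case from above (the struct is a made-up stand-in; only the &#39;Id&#39; field matters):</p>

<pre><code class="language-go">package main

import "fmt"

// SimonsStruct is a stand-in for the real struct; only Id matters here.
type SimonsStruct struct {
	Id int
	// more fields...
}

// extractIds copies the ids out of the map. len(m) is known upfront,
// so the slice gets its full capacity right away and append never
// has to reallocate.
func extractIds(m map[string]SimonsStruct) []int {
	ids := make([]int, 0, len(m))
	for _, v := range m {
		ids = append(ids, v.Id)
	}
	return ids
}

func main() {
	m := map[string]SimonsStruct{"a": {Id: 1}, "b": {Id: 2}}
	fmt.Println(len(extractIds(m))) // 2
}
</code></pre>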

<h2 id="benchmark-code">Benchmark Code</h2>

<p><strong>Bad: Initialize slice empty even if you know the target size</strong></p>

<pre><code class="language-go">const size = 100000 // pick one of the size values benchmarked in the table below

func BenchmarkEmptyInit(b *testing.B) {
	for n := 0; n &lt; b.N; n++ {
		data := make([]int, 0)
		for k := 0; k &lt; size; k++ {
			data = append(data, k)
		}
	}
}
</code></pre>

<p><strong>Best for big size: Initialize slice with known capacity and add data with append</strong></p>

<pre><code class="language-go">const size = 100000 // pick one of the size values benchmarked in the table below

func BenchmarkKnownAppend(b *testing.B) {
	for n := 0; n &lt; b.N; n++ {
		data := make([]int, 0, size)
		for k := 0; k &lt; size; k++ {
			data = append(data, k)
		}
	}
}
</code></pre>

<p><strong>Best for small &amp; medium size: Initialize slice with known capacity &amp; length and add data with direct access</strong></p>

<pre><code class="language-go">const size = 100000 // pick one of the size values benchmarked in the table below

func BenchmarkKnownDirectAccess(b *testing.B) {
	for n := 0; n &lt; b.N; n++ {
		data := make([]int, size, size)
		for k := 0; k &lt; size; k++ {
			data[k] = k
		}
	}
}
</code></pre>

<h2 id="results">Results</h2>

<p>The table shows the time it took for each approach to initialize all its elements <em>(only measured inside the benchmark loop)</em>.
<strong>Have an eye on the units! (ns/ms/s)</strong></p>

<table>
<thead>
<tr>
<th>#Elements (size)</th>
<th>EmptyInit</th>
<th>KnownAppend</th>
<th>KnownDirectAccess</th>
</tr>
</thead>

<tbody>
<tr>
<td>1</td>
<td>31.00 ns</td>
<td>1.52 ns</td>
<td>0.72 ns</td>
</tr>

<tr>
<td>100</td>
<td>852 ns</td>
<td>81.4 ns</td>
<td>59.1 ns</td>
</tr>

<tr>
<td>100 000</td>
<td>1.11 ms</td>
<td>0.22 ms</td>
<td>0.20 ms</td>
</tr>

<tr>
<td>1 000 000</td>
<td>10.76 ms</td>
<td>3.13 ms</td>
<td>3.14 ms</td>
</tr>

<tr>
<td>100 000 000</td>
<td>2.48 s</td>
<td>0.21 s</td>
<td>0.22 s</td>
</tr>

<tr>
<td>300 000 000</td>
<td>6.79 s</td>
<td>0.90 s</td>
<td>0.95 s</td>
</tr>
</tbody>
</table>

<h2 id="interpretation">Interpretation</h2>

<p>That initializing the slice with len &amp; capacity 0 would be the worst was obvious, but I am still surprised that the append approach outperforms direct access for bigger sizes.</p>
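<p>Why is the empty init so much slower? Every time append runs out of capacity, it has to allocate a bigger backing array and copy everything over. A minimal sketch that counts those reallocations (the exact count depends on the Go version&#39;s growth strategy):</p>

<pre><code class="language-go">package main

import "fmt"

func main() {
	const size = 100000
	data := make([]int, 0) // no capacity known to the runtime
	grows := 0
	for k := 0; k &lt; size; k++ {
		if cap(data) == len(data) {
			grows++ // append will have to allocate a bigger array
		}
		data = append(data, k)
	}
	fmt.Println(grows) // dozens of reallocations instead of zero
}
</code></pre>

<p>With &#39;make([]int, 0, size)&#39; the same loop performs zero reallocations.</p>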

<p>But after thinking about it, it totally makes sense. The direct access approach needs to write every entry twice:</p>

<p>1) Initializing the whole array with its zero value (in our case int, so &#39;0&#39;)
2) Writing the actual value into the slice</p>

<p>Step 1) is not needed with the append approach: we only reserve a memory location, and whatever was there before stays until we write the actual values in step 2. For bigger slices this setup overhead outweighs the performance benefit of direct access.
This is even more significant if the values in the slice are not simple ints but something bigger (e.g. a struct with a lot of fields), as the setup then has to initialize even more zero values.</p>
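<p>A minimal sketch of the difference between the two &#39;make&#39; variants:</p>

<pre><code class="language-go">package main

import "fmt"

func main() {
	// Length + capacity: all elements already exist, pre-filled with the zero value.
	direct := make([]int, 3)
	fmt.Println(direct, len(direct), cap(direct)) // [0 0 0] 3 3

	// Capacity only: the memory is reserved, but no elements exist yet,
	// so there is nothing to pre-fill.
	appended := make([]int, 0, 3)
	fmt.Println(appended, len(appended), cap(appended)) // [] 0 3
}
</code></pre>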

<h2 id="conclusion">Conclusion</h2>

<p>The hint I found online was right: If you know the size of your target slice, <strong>always</strong> initialize it with that size as capacity. For small &amp; medium size slices use the direct access approach. For very big slices use append.</p>

<p>Thanks for reading and see you next week!</p>

<p>Got any feedback? I would love to answer it on <a href="https://news.ycombinator.com/item?id=20632558" rel="nofollow">HackerNews</a></p>

<p>P.S. There is an <a href="https://blog.simon-frey.eu/feed/" rel="nofollow">RSS Feed</a></p>
]]></content:encoded>
      <guid>https://simonfrey.writeas.com/known-length-slice-initialization-speed-golang-benchmark-wednesday</guid>
      <pubDate>Wed, 07 Aug 2019 07:08:11 +0000</pubDate>
    </item>
    <item>
      <title>Working with Microsoft Azure for 20 hours and why I will not use it again</title>
      <link>https://simonfrey.writeas.com/working-with-microsoft-azure-for-20-hours-and-why-i-will-not-use-it-again?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Microsoft Azure Anger&#xA;&#xA;Last weekend I attended a Hackathon at Microsoft. Overall it was an awesome experience and I had a lot of fun, so this post has nothing to do with the event itself and neither does it reflect my overall opinion on Microsoft. They do awesome stuff in a lot of fields, but with Azure, they are definitely underdelivering.&#xA;&#xA;During the event, I started to get in contact with the Azure platform. Our project idea was to create a website where you can search for news and then via sentiment analysis this news would be sorted by &#34;happiness&#34;. The news search and sentiment analysis are offered via Azures so-called cognitive services that abstract the ML models away and you let you simply use an API for accessing those services....so far so good. With this premise most of you coders out there will have the thought: &#34;This sounds too easy to fill 24h of programming&#34;. Exactly what I thought...and was already thinking about also coding an Alexa skill and so on to fill the time. With two experienced developers, we thought the backend would be done in about 4h (conservative calculation) as it would only be stitching together three APIs and delivering that info to a JSON REST API for our frontend team. For keeping the fun up and having more learnings during the project we decided to do the backend as a serverless function. But then Azure got into our way...&#xA;&#xA;In the end, it took us ~9h to develop the backend as a serverless function consisting of mainly of a 40 line JavaScript file we had to develop in the in-browser &#34;editor&#34; that Azure offers as all the other approaches we tried didn&#39;t work out and we ended up abandoning them. Once again: 9 hours for 40 lines of JS code stitching together three APIs...that is insane. 
(Btw at 3 am we decided to switch to GCP (google cloud platform) and that did the job in about 45 minutes)&#xA;&#xA;So for sure we did things wrong and it could have been done faster, but this blog post is about the hard onboarding and overall bad structure of Azure. Please also keep in mind that Azure is still in a more-or-less early stage and not all of it is broken. In the following, I will walk you through the timeline of this disaster and suggestions I would have in mind to fix some of the most confusing steps. Actually, I will try to avoid these mistakes in my own future projects, so thanks Microsoft by showing me a way how not to do things xD&#xA;&#xA;Just a bit more background: My partner in the backend had some experience with GCP and I do most of my current projects with AWS, so we did know how things work there...couldn&#39;t be too hard to transfer that knowledge to the Azure platform.&#xA;&#xA;Start of the project&#xA;&#xA;So first of all creating a new Azure account, that is not that hard and after entering credit card info you get 100$ of free credit. I actually like how Microsoft solved that here: You have two plans. You start with the 100$ free tier and if you spend all of that money you manually have to change to the pay-as-you-go plan. So that protects you of opening up an account, doing some testing, forgetting about it and then a month later you get a huge bill (happened to me with AWS). So that is nice for protecting new users that just start to test the system. Good job here Microsoft!&#xA;&#xA;After setting up the account I created a new project and added some of the resources we needed. Creating a serverless function I recognized the tag &#34;(Preview)&#34; on the function I created but didn&#39;t think more about it...but actually, that sign should be something like Experimental/Do not use/Will most likely not work properly. 
We created a Python serverless function (apparently Python functions are still beta there) and tried to get some code in there. &#xA;&#xA;There are three ways to get code into an azure function: &#xA;&#xA;Web &#34;editor&#34;&#xA;Azure CLI&#xA;VS Code&#xA;&#xA;...for full-featured functions. As we selected the experimental/beta/preview functionality Python we only had the latter two options. Not that bad as it is the same for AWS and I am used to deploying my code via the AWS cmd...shouldn&#39;t be way harder with Azure.&#xA;&#xA;My suggesting: Do not do publish functionality that is obviously not ready yet. Do internal testing instead of using your users for that task.&#xA;&#xA;Azure plugins for VS code&#xA;&#xA;Microsoft overs a wide range of VS code plugins for Azure. As that is my main editor anyways I wanted to give them a try. So for the functionality of serverless functions, you need the functions plugin and about 9 other mandatory ones that are some sort of base plugins. 50MB and three VS Code crashes later the required plugins were finally installed properly. The recommended login method did not work and I had to choose the method of authenticating via the browser instead. Not that big of a deal, but as they recommend the inline method one would think that should work. (Didn&#39;t work for the other folks in my team either...so it had nothing to do with my particular machine)&#xA;&#xA;You would think that 500MB should be enough for finally being able to deploy some code...but you still need 200MB more for the Azure cli that is required for the plugins to work properly.&#xA;&#xA;Finally having installed all of it you can see all your Azure functions and resources in VS code. 
I started to get a bit excited as it looked like from now on the development would be straight forward and easier as I am used to from AWS.&#xA;&#xA;But that 700mb of code did not work properly....the most important function &#34;deploy&#34; failed without any detailed error message...AAAAAAARRRG. Why do I have to install all that crap and then it can&#39;t do the most basic task it has to do: get my code into their cloud. &#xA;&#xA;Keep your tooling modular and try to do fewer things, but do them right&#xA;&#xA;Code templates&#xA;&#xA;A nice idea is that on creating a new serverless function Azure greets you with a basic boilerplate code example showing you how to handle the basic data interfaces.&#xA;&#xA;It might have been because we selected the alpha functionality &#34;Python&#34;, that we didn&#39;t actually get Python code here but JavaScript. So your function is prepopulated with code that is not able to run because it is the wrong programming language. We were lucky and recognized that right away, but you could get really confusing error messages here if you then start developing in JS but actually having a Python runtime.&#xA;&#xA;Better no boilerplate code than one in the wrong programming language&#xA;&#xA;But at least it is colorful&#xA;&#xA;So next try with the Azure CLI. The first thing that you recognize is that the CLI has all sorts of different colors...but that does not help if you are annoyed and want to get things done.&#xA;&#xA;That is a thing you also see in the Azure web interface...it has got quite a few UX issues but they do have over five color themes that you can choose from for styling the UI...Microsoft I&#39;m not sure if you set your priorities right here ;)&#xA;&#xA;Also, the CLI did not get us where we wanted....either due to our own incompetence or due to the CLI itself, no clue. 
Either way, I would blame Azure as it is their job to help developers onboard and at least get basic tasks (we still only want to deploy a simple &#34;hello world&#34;) done in an acceptable time.&#xA;&#xA;Focus less on making your UI shine in every color of the rainbow and try to improve documentation and onboarding examples&#xA;&#xA;Full ownership of a resource still does not give you full privileges&#xA;&#xA;After finally being able to deploy at least the &#34;hello world&#34; we wanted to go a step further...work concurrently on that project. Yes until now we mainly did pair programming on a single machine.&#xA;&#xA;As I was the owner of that resource I also wanted to give my teammate full access to it, so that he could work on the resource and add functions if required. I granted him &#34;owner&#34; access rights (the highest that were available) but he was still not able to work properly with that function. In the web UI it did work more or less but than again in VS code there&#39;s no chance to do anything (adding a function or deploying it). I ended up doing something that goes against everything I learned about security: I logged in with my credentials on his machine.&#xA;&#xA;So imagine yourself now already sitting in front of your laptop for about 4 1/2 hours and you did not manage to do any of the actual work you set out to do.&#xA;&#xA;Ditching Azure Functions and switching to GCP&#xA;&#xA;That was the moment when we ditched the idea of doing the backend as an Azure function. We switched to GCP where we started all over again. As I&#39;ve never worked with that platform either I expected a similar hard start, as I already had in the last few hours with Azure. But then about 25 minutes later we achieved more on GCP than with Azure until then. &#xA;&#xA;Something both Azure and GCP do better than AWS is that they have the logs of a serverless function in the same window as the function itself. 
AWS has a different approach here and you have to change to the cloud logs when you want to get info about your function and how it worked. Props to both Google and Microsoft for solving this a lot better!&#xA;&#xA;Actually a hint for AWS: Give your user all controls and info at a single place&#xA;&#xA;Cognitive services&#xA;&#xA;The prices you could win at the Hackathon were attached to using Azure and thereby we stuck to the cognitive services for doing the news search and the sentiment analysis. Overall the API is straight forward: Send your data and get the results back. &#xA;&#xA;One thing we got told in a presentation and that you should keep in mind when using the cognitive services: You do not control the model and it could change at any moment in time. So if you use the cognitive services for productive use, you should continuously check that the API didn&#39;t change its behavior in a way that influences your product in a bad way. But most of the time it is still a lot cheaper and better than building the model yourself&#xA;&#xA;The problem that we did have with the services were again authentication issues. Quite confusing some of the cognitive services (e.g. the sentiment analysis) have different API base URLs depending on where you register that cognitive service and others do not. As I assume they need that manual setting of data centers for a particular (unknown to me) reason. Indeed I would propose to have all the cognitive services bound to a location.&#xA;&#xA;The news search, for example, is not bound to a location and so we had two different behaviors of the API base URLs in our so short and easy application:&#xA;&#xA;One URL for all locations.&#xA;Only a certain location is valid for your resource. 
If you point to a wrong API location you get an &#34;unauthorized&#34; as the response&#xA;&#xA;Pointing to the wrong location is pure incompetence on the developer side but it would help a lot if there would be a distinct error code/message for that scenario.&#xA;&#xA;Have the same base URL behavior for all cognitive services&#xA;&#xA;Return some sort of &#39;wrong location&#39;-error if you have a valid API token but you are pointing to the wrong location&#xA;&#xA;Insufficiently documented SDKs&#xA;&#xA;Azure offers SDKs for using their services. We gave the JS SDK for the cognitive services a try. Here we had both ups and downs: First, props to the developers coding the SDKs, as they are straight forward and do what they should. Even the code itself looks good...but why the hell do I have to look into the code of the SDKs to get all the options the functions offer? When you stick to the documentation provided via the GitHub readme or NPM you only get a fraction of the functionality. We were confused that Microsoft&#39;s own SKDs seemed not to be API complete. Looking into the code we saw they are actually API complete and do offer a lot more options than documented.&#xA;&#xA;Please Microsoft: Properly document your functionalities!&#xA;&#xA;IMO there must be deep problems with the internal release processes at Azure. It is not acceptable that an IT company that&#39;s been in the industry for so long allows itself such a basic mistake. You should not release your products (and I see the SDKs as such) without proper documentation.&#xA;&#xA;&#34;Code Examples&#34;&#xA;&#xA;During our trial and error period of trying to get the JS SDK running, we stumbled upon the quickstart guide for the cognitive services Quickstart: Analyze a remote image using the REST API with Node.js in Computer Vision&#xA;&#xA;Instead of using their own SDK and explaining how to use it they show you how to manually build an HTTP request in JS. 
Sure that can be helpful for new JS coders, but if you have an SDK for that particular reason...why are you not using it? Looks like the left hand is not knowing what the right-hand does.&#xA;&#xA;Stick to one way of doing things. If you have an SKD, also use it in your quickstart guides for being consistent&#xA;&#xA;Conclusion&#xA;&#xA;In the end, we did port the code back from GCP to an Azure function (again ~1h of work). We selected JS instead of Python and coded completely in the web UI...that did work. I now know how real Microsoft business developers do their daily business...never leave the web UI and just accept that life is hard.&#xA;&#xA;Microsoft failed to deliver a adequate experience here and lost me as a potential customer. How can it be that I was able to do the same things in a fraction of the time in GCP? (And keep in mind: it was already 3am in the morning, I was super tired and I also never worked with GCP before)&#xA;&#xA;None of the three major players are perfect and sure I understand it is hard to deliver fast and keeping good quality in this highly competitive market. But maybe actually going the step further will help to win in the end.&#xA;&#xA;Once again: This is me only rating the onboarding experience of Azure in particular! No general opinion on Microsoft.&#xA;&#xA;Last one: The Azure web UI didn&#39;t work in Chrome. So if you have issues with that, Firefox did the trick for us ;)]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://simon-frey.eu/static/ms_azure.jpg" alt="Microsoft Azure Anger"/></p>

<p>Last weekend I attended a Hackathon at Microsoft. Overall it was an awesome experience and I had a lot of fun, so this post has nothing to do with the event itself and neither does it reflect my overall opinion on Microsoft. They do awesome stuff in a lot of fields, but with Azure, they are definitely underdelivering.</p>

<p>During the event, I started to get in contact with the Azure platform. Our project idea was to create a website where you can search for news, which would then be sorted by “happiness” via sentiment analysis. The news search and sentiment analysis are offered via Azure&#39;s so-called cognitive services, which abstract the ML models away and let you simply use an API for accessing those services....so far so good. With this premise most of you coders out there will have the thought: “This sounds too easy to fill 24h of programming”. Exactly what I thought...and I was already thinking about also coding an Alexa skill and so on to fill the time. With two experienced developers, we thought the backend would be done in about 4h (a conservative estimate), as it would only be stitching together three APIs and delivering that info to a JSON REST API for our frontend team. To keep the fun up and have more learnings during the project, we decided to do the backend as a serverless function. But then Azure got in our way...</p>

<p>In the end, it took us ~9h to develop the backend as a serverless function, consisting mainly of a 40 line JavaScript file we had to develop in the in-browser “editor” that Azure offers, as all the other approaches we tried didn&#39;t work out and we ended up abandoning them. Once again: 9 hours for 40 lines of JS code stitching together three APIs...that is insane. (Btw, at 3 am we decided to switch to GCP (Google Cloud Platform) and that did the job in about 45 minutes)</p>

<p>For sure we did things wrong and it could have been done faster, but this blog post is about the hard onboarding and overall bad structure of Azure. Please also keep in mind that Azure is still in a more-or-less early stage and not all of it is broken. In the following, I will walk you through the timeline of this disaster and the suggestions I have in mind to fix some of the most confusing steps. Actually, I will try to avoid these mistakes in my own future projects, so thanks, Microsoft, for showing me how not to do things xD</p>

<p>Just a bit more background: My partner in the backend had some experience with GCP and I do most of my current projects with AWS, so we did know how things work there...couldn&#39;t be too hard to transfer that knowledge to the Azure platform.</p>

<h3 id="start-of-the-project">Start of the project</h3>

<p>First of all, creating a new Azure account: that is not that hard, and after entering credit card info you get $100 of free credit. I actually like how Microsoft solved that here: You have two plans. You start with the $100 free tier, and if you spend all of that money you <strong>manually</strong> have to change to the pay-as-you-go plan. That protects you from opening up an account, doing some testing, forgetting about it, and then getting a huge bill a month later (happened to me with AWS). So that is nice for protecting new users who just start to test the system. Good job here, Microsoft!</p>

<p>After setting up the account I created a new project and added some of the resources we needed. Creating a serverless function, I recognized the tag “(Preview)” on the function I created but didn&#39;t think more about it...but actually, that sign should read something like <em>Experimental/Do not use/Will most likely not work properly</em>. We created a Python serverless function (apparently Python functions are still beta there) and tried to get some code in there.</p>

<p>There are three ways to get code into an Azure function:</p>
<ul><li>Web “editor”</li>
<li>Azure CLI</li>
<li>VS Code</li></ul>

<p>...for full-featured functions. As we selected the experimental/beta/preview functionality Python, we only had the latter two options. Not that bad, as it is the same for AWS and I am used to deploying my code via the AWS CLI...shouldn&#39;t be much harder with Azure.</p>

<p><strong>My suggestion: Do not publish functionality that is obviously not ready yet. Do internal testing instead of using your users for that task.</strong></p>

<h3 id="azure-plugins-for-vs-code">Azure plugins for VS Code</h3>

<p>Microsoft offers a wide range of VS Code plugins for Azure. As that is my main editor anyway, I wanted to give them a try. For the serverless functions functionality, you need the functions plugin and about 9 other mandatory ones that are some sort of base plugins. 50MB and three VS Code crashes later, the required plugins were finally installed properly. The recommended login method did not work and I had to authenticate via the browser instead. Not that big of a deal, but as they recommend the inline method, one would think it should work. (It didn&#39;t work for the other folks in my team either...so it had nothing to do with my particular machine)</p>

<p>You would think that 500MB should be enough to finally be able to deploy some code...but you still need 200MB more for the Azure CLI, which is required for the plugins to work properly.</p>

<p>Having finally installed all of it, you can see all your Azure functions and resources in VS Code. I started to get a bit excited, as it looked like from now on the development would be straightforward and easier than what I am used to from AWS.</p>

<p>But those 700MB of code did not work properly....the most important function, “deploy”, failed without any detailed error message...AAAAAAARRRG. Why do I have to install all that crap when it can&#39;t do the most basic task it has to do: get my code into their cloud.</p>

<p><strong>Keep your tooling modular and try to do fewer things, but do them right</strong></p>

<h3 id="code-templates">Code templates</h3>

<p>A nice idea: on creating a new serverless function, Azure greets you with a basic boilerplate code example showing you how to handle the basic data interfaces.</p>

<p>It might have been because we selected the alpha functionality “Python”, but we didn&#39;t actually get Python code here, we got JavaScript. So your function is prepopulated with code that is not able to run because it is in the wrong programming language. We were lucky and recognized that right away, but you could get really confusing error messages here if you start developing in JS while actually having a Python runtime.</p>

<p><strong>Better no boilerplate code than boilerplate in the wrong programming language</strong></p>

<h3 id="but-at-least-it-is-colorful">But at least it is colorful</h3>

<p>So, next try with the Azure CLI. The first thing you notice is that the CLI output comes in all sorts of different colors...but that does not help if you are annoyed and want to get things done.</p>

<p>That is a thing you also see in the Azure web interface...it has quite a few UX issues, but they do have over five color themes that you can choose from for styling the UI...Microsoft, I&#39;m not sure if you set your priorities right here ;)</p>

<p>Also, the CLI did not get us where we wanted....either due to our own incompetence or due to the CLI itself, no clue. Either way, I would blame Azure as it is their job to help developers onboard and at least get basic tasks (we still only want to deploy a simple “hello world”) done in an acceptable time.</p>

<p><strong>Focus less on making your UI shine in every color of the rainbow and try to improve documentation and onboarding examples</strong></p>

<h3 id="full-ownership-of-a-resource-still-does-not-give-you-full-privileges">Full ownership of a resource still does not give you full privileges</h3>

<p>After finally being able to deploy at least a “hello world”, we wanted to go a step further and work concurrently on the project. Yes, until then we had mainly done pair programming on a single machine.</p>

<p>As the owner of the resource, I wanted to give my teammate full access to it, so that he could work on the resource and add functions if required. I granted him “owner” rights (the highest available), but he was still not able to work properly with the function. In the web UI it more or less worked, but in VS Code there was no way for him to do anything (add a function or deploy it). I ended up doing something that goes against everything I ever learned about security: I logged in with my own credentials on his machine.</p>

<p>So imagine yourself sitting in front of your laptop for about 4½ hours without having managed to do any of the actual work you set out to do.</p>

<h3 id="ditching-azure-functions-and-switching-to-gcp">Ditching Azure Functions and switching to GCP</h3>

<p>That was the moment we ditched the idea of building the backend as an Azure function. We switched to GCP and started all over again. As I had never worked with that platform either, I expected a similarly hard start to the one I had just had with Azure. Instead, about 25 minutes later we had achieved more on GCP than we had on Azure in all the hours before.</p>

<p>Something both Azure and GCP do better than AWS: they show the logs of a serverless function in the same window as the function itself. AWS takes a different approach, and you have to switch over to the cloud logs when you want information about your function and how it ran. Props to both Google and Microsoft for solving this a lot better!</p>

<p><strong>Actually a hint for AWS: Give your users all controls and info in a single place</strong></p>

<h3 id="cognitive-services">Cognitive services</h3>

<p>The prizes you could win at the hackathon were tied to using Azure, so we stuck with the cognitive services for the news search and the sentiment analysis. Overall the API is straightforward: send your data and get the results back.</p>

<p><em>One thing we were told in a presentation that you should keep in mind when using the cognitive services: you do not control the model, and it could change at any moment. So if you use the cognitive services in production, you should continuously check that the API hasn&#39;t changed its behavior in a way that hurts your product. But most of the time it is still a lot cheaper and better than building the model yourself.</em></p>

<p>The problems we did have with the services were, again, authentication issues. Confusingly, some of the cognitive services (e.g. the sentiment analysis) have different API base URLs depending on where you register the service, and others do not. I assume they need that manual choice of data center for some particular (unknown to me) reason; still, I would propose binding all the cognitive services to a location consistently.</p>

<p>The news search, for example, is not bound to a location, so we had two different base URL behaviors in our short and simple application:</p>
<ul><li>One URL for all locations.</li>
<li>Only one particular location is valid for your resource. If you point to the wrong API location, you get “unauthorized” as the response.</li></ul>

<p>Pointing to the wrong location is pure incompetence on the developer&#39;s side, but it would help a lot if there were a distinct error code/message for that scenario.</p>

<p><strong>Have the same base URL behavior for all cognitive services</strong></p>

<p><strong>Return some sort of &#39;wrong location&#39;-error if you have a valid API token but you are pointing to the wrong location</strong></p>
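<p>To make the two behaviors concrete, here is a minimal sketch in plain JavaScript. The hostname pattern mirrors what we ran into for region-bound services, but treat the path, the region name, and the <code>explainAuthError</code> helper as illustrative assumptions, not an official Azure API:</p>

```javascript
// Sketch of the two base URL behaviors (illustrative, not the official SDK).
// Region-bound services live under a per-region hostname; global ones do not.
function baseUrl(servicePath, region) {
  return region
    ? `https://${region}.api.cognitive.microsoft.com/${servicePath}`
    : `https://api.cognitive.microsoft.com/${servicePath}`;
}

// A 401 with a seemingly valid key is ambiguous: bad key OR wrong region.
// Surfacing that ambiguity is exactly the error message we were missing.
function explainAuthError(status, keyLooksValid) {
  if (status !== 401) return null;
  return keyLooksValid
    ? 'Unauthorized: key may be fine, but you may be pointing at the wrong region'
    : 'Unauthorized: invalid subscription key';
}

console.log(baseUrl('sentiment', 'westeurope'));
console.log(explainAuthError(401, true));
```

<p>With a helper like this, the “wrong location” case at least produces a hint instead of a bare “unauthorized”.</p>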

<h3 id="insufficiently-documented-sdks">Insufficiently documented SDKs</h3>

<p>Azure offers SDKs for using their services. We gave the JS SDK for the cognitive services a try. Here we had both ups and downs. First, props to the developers coding the SDKs: they are straightforward and do what they should. Even the code itself looks good...but why the hell do I have to read the source of the SDKs to find all the options the functions offer? If you stick to the documentation provided via the GitHub readme or npm, you only see a fraction of the functionality. We were confused that Microsoft&#39;s own SDKs seemed not to be API-complete. Looking into the code, we saw they actually are, and offer a lot more options than documented.</p>

<p><strong>Please Microsoft: Properly document your functionality!</strong></p>

<p>IMO there must be deep problems with the internal release processes at Azure. It is not acceptable that an IT company that has been in the industry for so long allows itself such a basic mistake. You should not release your products (and I count the SDKs as such) without proper documentation.</p>

<h3 id="code-examples">“Code Examples”</h3>

<p>During our trial-and-error period of trying to get the JS SDK running, we stumbled upon the quickstart guide for the cognitive services: <a href="https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/quickstarts/node-analyze" rel="nofollow">Quickstart: Analyze a remote image using the REST API with Node.js in Computer Vision</a></p>

<p>Instead of using their own SDK and explaining how to use it, they show you how to build an HTTP request in JS by hand. Sure, that can be helpful for new JS coders, but if you have an SDK for exactly this purpose...why are you not using it? It looks like the left hand does not know what the right hand is doing.</p>

<p><strong>Stick to one way of doing things. If you have an SDK, also use it in your quickstart guides, to be consistent</strong></p>

<h2 id="conclusion">Conclusion</h2>

<p>In the end, we did port the code back from GCP to an Azure function (again ~1h of work). We chose JS instead of Python and coded entirely in the web UI...that did work. I now know how real Microsoft business developers do their daily work...never leave the web UI and just accept that life is hard.</p>

<p>Microsoft failed to deliver an adequate experience here and lost me as a potential customer. How can it be that I managed to do the same things in a fraction of the time on GCP? (And keep in mind: it was already 3 a.m., I was super tired, and I had never worked with GCP before either.)</p>

<p>None of the three major players is perfect, and sure, I understand it is hard to ship fast while keeping quality high in this highly competitive market. But maybe actually going that extra step will help win in the end.</p>

<p><em>Once again: this is me rating only the onboarding experience of Azure! It is not a general opinion on Microsoft.</em></p>

<p><strong>Last one: The Azure web UI didn&#39;t work in Chrome. So if you have issues with that, Firefox did the trick for us ;)</strong></p>
]]></content:encoded>
      <guid>https://simonfrey.writeas.com/working-with-microsoft-azure-for-20-hours-and-why-i-will-not-use-it-again</guid>
      <pubDate>Tue, 16 Apr 2019 16:44:41 +0000</pubDate>
    </item>
    <item>
      <title>Women in front of laptop</title>
      <link>https://simonfrey.writeas.com/why-every-saas-company-should-reevaluate-their-live-chat-support-in-2019?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Women in front of laptop&#xA;&#xA;Why every SaaS company should reevaluate their live chat strategy in 2019&#xA;&#xA;What would you say, if I tell you that you leave over 70% off your potential customers on the table, because you do not have a live chat on your website? FurstPerson discovered exactly that: 77% of your customers are very unlikely to make a purchase if we do not offer a live chat. Wow! That huge number should be enough to convince everyone to directly search for a live chat solution that gets that customers back.&#xA;&#xA;The technical setup of a website live chat is the easy part. There are other points you have to think about and I hope that this article helps you to reevaluate your live chat strategy.&#xA;&#xA;Customers demand live chat support&#xA;&#xA;But email support worked now for a decade, why are customers becoming  so demanding for a live chat support channel?&#xA;&#xA;We are living in a time where we can get everything we want in no time. Do you sometimes find yourself being annoyed that you have to wait for the next day until your amazon delivery is at your door? Or when your spouse does not reply within minutes? Customers are as impatient as you are yourself!&#xA;&#xA;When your (potential) customers have a question, they want to have it answered in a few minutes instead of waiting for a day until their email tickets are finally read. In 2019 customers do not excuse a slow support channel anymore!&#xA;&#xA;So if competitors are able to answer the customers questions faster than you are, they will outperform you in sales and overall customer satisfaction. &#xA;&#xA;We as entrepreneurs have to adapt to this new customer requirement to stay in the game! Every site should have a live chat possibility &#xA;&#xA;A bad experience is worse than no live chat at all&#xA;&#xA;Only one thing that is hurting a business more than no live chat is a bad live chat experience. 
You are not done with just putting a widget on your page and then configuring it to send you an email. I hate it if I open the live chat window, type in my question and then after 1 minute a bot tells me that the team is away and that I please should use some weird email form. Why the hell is there a live chat window at all if no one is answering in less than 2 minutes? If you use your chat that way, please send you customers directly to the email form and tell them how long they will have to wait on average for a response. You are than not 100% in line with the modern live chat situation, but it is still way better than after all having an email form that looks like a live chat window!&#xA;&#xA;If your live chat is just another design for your email form, please do not use that live chat at all&#xA;&#xA;Live chat support needs (wo-)manpower&#xA;&#xA;Bots, Artificial Intelligence and Machine Learning are very nice for live chat company in their sales pitch but after all you always need a human being on the other side of your customer live chat. Technology will help you to a certain point, but the biggest value you create is your customer feeling appreciated. You show, that you and your company do care so much for them that there is always a human being happy to help with all their issues.&#xA;&#xA;Did you ever have the situation a friend of yours told you about a company that helped super fast with an issue? I&#39;m 100% sure that friend still is a customer at that company. We humans want to feel valued and if someone does that, we will stick to that person/company.&#xA;&#xA;Value your customers with human support agents instead of heartless bots. This investment will definitely pay back!&#xA;&#xA;Your live chat is your most honest feedback channel&#xA;&#xA;Compared to dedicated customer services and feedback forms your live chat can and will be your most honest feedback channel. 
You will experience the problems your customer have with the product in the second they stumble upon it. And yes...sometimes just reading an FAQ or so would have helped your customer to circumvent that problem, but that is not how customers function. They want your product to make their life easier, and they will not work trough extensive manuals to understand how to use your product.&#xA;&#xA;If you get the same questions over and over again, you definitely should think about changing your product at that particular point. And the best of all: You can just ask your customer during the live chat session what would help them to circumvent that problem in the future. They will feel valued and you get a customer survey for free ;)&#xA;&#xA;Live chat helps you to understand just in time what problems your customer stumble upon. Use that feedback for improving your product&#xA;&#xA;With great power comes great responsibility&#xA;&#xA;One thing at the end. There are awesome features for pull marketing within some extensive live chat solutions, but please try to use them wisely. A lot of your potential customer will run away from your website screaming if you bombard them with popups and windows: &#34;Here is our newsletter&#34;, &#34;Get 20% off&#34;, &#34;Start a live chat with us&#34;&#xA;&#xA;You can use certain triggers if you experience your customer stuck somewhere. Maybe they are hovering for 20 seconds over the pricing page or are extensively scrolling up and down on your page. Then it is a good point to offer them live chat support by automatically opening the live chat window. Just that they are on your site for 2 seconds is no valid reason!&#xA;&#xA;Try to only automatically open the chat window if your customer seems stuck. Please do not annoy them with useless popups. 
They will find the live chat in the bottom right corner when they need it&#xA;&#xA;Live chat solution for solopreneurs&#xA;&#xA;Full disclosure: I am the co-founder of gramchat&#xA;&#xA;With gramchat we tried to solve the bespoken issues and create a live chat solution that helps you to serve your customers best. Gramchat directly send the customer messages to your Telegram Messenger and from there you can answer them directly - no extra app required. Gym, beer with friends or during your day-job, help your customers where ever you are.&#xA;&#xA;With the Telegram integration we try to solve the &#34;live chat is just an email form&#34;-problem. With gramchat you are able to answer your customer within the important first 90 seconds.&#xA;&#xA;But enough advertisement! There are different great live chat solutions out there and you should pick the one that suits you the best. For a small team or as solopreneur, gramchat may be your perfect fit :D&#xA;&#xA;I would love if you give it a try =  gramchat.me&#xA;&#xA;Wish you all an awesome time!&#xA;Simon&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://simon-frey.eu/static/womenLaptop.jpg" alt="Women in front of laptop"/></p>

<h1 id="why-every-saas-company-should-reevaluate-their-live-chat-strategy-in-2019">Why every SaaS company should reevaluate their live chat strategy in 2019</h1>

<p>What would you say if I told you that you leave over 70% of your potential customers on the table because you do not have a live chat on your website? <a href="https://www.furstperson.com/blog/8-essential-live-chat-customer-support-statistics" rel="nofollow">FurstPerson</a> discovered exactly that: 77% of customers are very unlikely to make a purchase if you do not offer a live chat. Wow! That huge number should be enough to convince everyone to go looking for a live chat solution that wins those customers back.</p>

<p>The technical setup of a <a href="https://gramchat.me" rel="nofollow">website live chat</a> is the easy part. There are other points you have to think about, and I hope this article helps you reevaluate your live chat strategy.</p>

<h2 id="customers-demand-live-chat-support">Customers demand live chat support</h2>

<p><em>But email support has worked for a decade now, so why are customers suddenly demanding a live chat support channel?</em></p>

<p>We are living in a time where we can get everything we want in no time. Do you sometimes find yourself annoyed that you have to wait until the next day for your Amazon delivery to arrive at your door? Or when your spouse does not reply within minutes? Your customers are as impatient as you are!</p>

<p>When your (potential) customers have a question, they want it answered within a few minutes instead of waiting a day until their email ticket is finally read. In 2019, customers no longer excuse a slow support channel!</p>

<p>So if competitors are able to answer customers&#39; questions faster than you are, they will outperform you in sales and overall customer satisfaction.</p>

<p><strong>We as entrepreneurs have to adapt to this new customer requirement to stay in the game! Every site should offer a live chat</strong></p>

<h2 id="a-bad-experience-is-worse-than-no-live-chat-at-all">A bad experience is worse than no live chat at all</h2>

<p>The only thing that hurts a business more than no live chat is a bad live chat experience. You are not done just putting a widget on your page and configuring it to send you an email. I hate it when I open the live chat window, type in my question, and then after a minute a bot tells me the team is away and that I should please use some weird email form. Why is there a live chat window at all if no one answers within 2 minutes? If you use your chat that way, please send your customers directly to the email form and tell them how long they will have to wait on average for a response. You are then not 100% in line with modern live chat expectations, but that is still far better than an email form dressed up as a live chat window!</p>

<p><strong>If your live chat is just another design for your email form, please do not use that live chat at all</strong></p>

<h2 id="live-chat-support-needs-wo-manpower">Live chat support needs (wo-)manpower</h2>

<p>Bots, artificial intelligence, and machine learning sound very nice in a live chat company&#39;s sales pitch, but in the end you always need a human being on the other side of your customer live chat. Technology will help you up to a certain point, but the biggest value you create is your customer feeling appreciated. You show that you and your company care so much about them that there is always a human being happy to help with all their issues.</p>

<p>Have you ever had a friend tell you about a company that helped them super fast with an issue? I&#39;m 100% sure that friend is still a customer of that company. We humans want to feel valued, and if someone makes us feel that way, we will stick with that person or company.</p>

<p><strong>Value your customers with human support agents instead of heartless bots. This investment will definitely pay off!</strong></p>

<h2 id="your-live-chat-is-your-most-honest-feedback-channel">Your live chat is your most honest feedback channel</h2>

<p>Compared to dedicated customer service channels and feedback forms, your live chat can and will be your most honest feedback channel. You experience the problems your customers have with the product the second they stumble upon them. And yes...sometimes just reading an FAQ would have helped your customer avoid the problem, but that is not how customers work. They want your product to make their life easier, and they will not work through extensive manuals to understand how to use it.</p>

<p>If you get the same questions over and over again, you should definitely think about changing your product at that particular point. And best of all: you can just ask your customer during the live chat session what would help them avoid the problem in the future. They will feel valued and you get a customer survey for free ;)</p>

<p><strong>Live chat helps you understand, just in time, what problems your customers stumble upon. Use that feedback to improve your product</strong></p>

<h2 id="with-great-power-comes-great-responsibility">With great power comes great responsibility</h2>

<p>One thing at the end: there are awesome pull-marketing features in some of the more extensive live chat solutions, but please use them wisely. A lot of your potential customers will run away from your website screaming if you bombard them with popups and windows: “Here is our newsletter”, “Get 20% off”, <em>“Start a live chat with us”</em></p>

<p>You can use certain triggers when your customer appears stuck. Maybe they have been hovering over the pricing page for 20 seconds, or they are scrolling up and down your page extensively. That is a good moment to offer live chat support by automatically opening the live chat window. <strong>The mere fact that they have been on your site for 2 seconds is no valid reason!</strong></p>

<p><strong>Only open the chat window automatically if your customer seems stuck. Please do not annoy them with useless popups. They will find the live chat in the bottom right corner when they need it</strong></p>
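<p>A “seems stuck” trigger like the one described above can be sketched in a few lines of plain JavaScript. The <code>openChat</code> callback here is a stand-in for whatever open call your chat widget actually exposes – an assumed name, not a real widget API:</p>

```javascript
// Open the chat only when a visitor lingers on a "stuck" element
// (e.g. the pricing table) for a while. openChat() is a placeholder for
// your live chat widget's open call - an assumed name, not a real API.
function stuckTrigger(element, openChat, delayMs = 20000) {
  let timer = null;
  element.addEventListener('mouseenter', () => {
    // Start the countdown while the visitor hovers over the element.
    timer = setTimeout(openChat, delayMs);
  });
  element.addEventListener('mouseleave', () => {
    // The visitor moved on before the delay: cancel, no popup.
    clearTimeout(timer);
  });
}

// Usage in a browser might look like:
// stuckTrigger(document.querySelector('#pricing'), () => myWidget.open());
```

<p>The point of the delay is exactly the rule above: the chat opens for someone who lingers on the pricing table, not for someone who just landed on the page.</p>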

<h2 id="live-chat-solution-for-solopreneurs">Live chat solution for solopreneurs</h2>

<p><em>Full disclosure: I am the co-founder of <a href="https://gramchat.me" rel="nofollow">gramchat</a></em></p>

<p>With <a href="https://gramchat.me" rel="nofollow">gramchat</a> we tried to solve the issues described above and create a live chat solution that helps you serve your customers best. Gramchat sends customer messages directly to your Telegram messenger, and from there you can answer them right away – no extra app required. <strong>At the gym, having a beer with friends, or during your day job: help your customers wherever you are.</strong></p>

<p>With the Telegram integration we try to solve the “live chat is just an email form” problem. With gramchat you are able to answer your customers within the important first 90 seconds.</p>

<p>But enough advertising! There are several great live chat solutions out there, and you should pick the one that suits you best. For a small team or a solopreneur, gramchat may be your perfect fit :D</p>

<p>I would love it if you gave it a try =&gt; <a href="https://gramchat.me" rel="nofollow">gramchat.me</a></p>

<p>Wish you all an awesome time!
Simon</p>
]]></content:encoded>
      <guid>https://simonfrey.writeas.com/why-every-saas-company-should-reevaluate-their-live-chat-support-in-2019</guid>
      <pubDate>Sun, 17 Mar 2019 20:48:55 +0000</pubDate>
    </item>
    <item>
      <title>Off-line developing during an intercity trip in 2019</title>
      <link>https://simonfrey.writeas.com/off-line-developing-during-an-intercity-trip-in-2019?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[No wifi on laptop image&#xA;&#xA;As you might imagine from the title, I am at the moment of writing this article sitting in a train from Berlin to Hamburg. For those of you who have never been in Germany...we do have WIFI on the trains here, but contrary to what you might expect it is really bad. (And if it is sometimes fast enough, you get only 200mb of traffic &lt;= Thanks to mac address randomization that can be bypassed)&#xA;&#xA;Wait, what? Bad WIFI on trains in the first world industry nation Germany? Yes, even during my travel on a train in Thailand I had way better WIFI than I ever experienced in the German trains. There are two main factors for that: &#xA;&#xA;Bad mobile network overall...if you leave the bigger cities you most of the time do not even have Edge (yes kids, slower than 2G) or a mobile network connection at all. So sad!&#xA;Cheap hardware in the trains. Actually the modems in the trains are standard 3G modems you may also purchase as mobile hot-spot device. Sure they are a bit more powerful, but they are not made for this special use case: Connection to new base stations in at a high ratio. It actually is a quite hard technical challenge to have a modem do this on a high speed train. But we have 2019 ...thinking about sending people to mars...and as we can see in other countries this problem is apparently solved. Maybe some more money would be good invested here.&#xA;&#xA;But enough ranting about the WIFI in here (that is BTW current non existent) &#xA;&#xA;OK sorry one more thought: Looking around me I see a lot of people in nice suites working on there laptops. Imagine them earning 60€/hour and they need double the time for a task, because the WIFI is so weak. Assuming there are 100 (conservative calculation) of such people on a train. 
So during this single trip from Berlin to Hamburg (2h) there is 60€  100  2 = 12 000€ of wasted human capital....better not tell that any company paying their employees the train ride and the &#34;work time&#34; during this trip.&#xA;&#xA;Actually this article is about tech&#xA;&#xA;I experience this not the first time, but why am I triggered this time that much, that I decided to write a blog post about this topic? As web developer I am currently working on a live chat project (gramchat.me - please be kind, the landing page would be finished if I actually could work here) where I wanted to finish the landing page &amp; documentation during this trip.&#xA;&#xA;Now I experience myself sitting here and my laptop, normally the device paying my rent, is not more than a dump black box....close to every work flow I have does requires the Internet, I can&#39;t work off-line. grrrrr &#xA;&#xA;How could that happen? Normally I am always at places with good WIFI or mobile network (Berlin Big City Life) and so some bad habits sneaked in:&#xA;&#xA;Development work&#xA;  Google fonts&#xA;  Payment gateway that needs to be configured&#xA;  Documentation (How could anyone write software before stackoverflow?)&#xA;  Package tools for just in time downloading of dependencies&#xA;  Github Issues and Board for organization&#xA;  Backend infrastructure is build on AWS lambda (can&#39;t test that offline)&#xA;Entertainment&#xA;  Movies are on netflix&#xA;  Music is on spotify&#xA;  I read mostly blog posts and web articles (via Hackero ;))&#xA;Communication&#xA;  Telegram/WhatsApp/Email&#xA;Information&#xA;  I am struggling to write this article as non-native speaker as I can&#39;t use Google translate&#xA;...and so on&#xA;&#xA;------&#xA;&#xA;Short interruption: Because of other issues I had to change to another (slower) train. 
This one does not have WIFI at all...so now next level shit.&#xA;&#xA;------&#xA;&#xA;I sit here and have basically three options what to do: &#xA;&#xA;Compose electronic music with LMMS, what I downloaded a few weeks ago but have no clue how to use it :&#39;D&#xA;Code something in Go. Thanks Goland for your awesome build in standard lib documentation!&#xA;Write this article ranting about the German train situation and about myself of being so depended on a resource I thought about as natural as air&#xA;&#xA;So here I am writing the article :D&#xA;&#xA;Prevent such a situation in the future&#xA;&#xA;So the biggest fail, is me not being prepared for off-line usage of my devices. So what will I do to prevent this in the future? Technical problems need technical solutions:&#xA;&#xA;Entertainment&#xA;  Music: Have at least some of my favorite playlists available offline&#xA;  Movies: Actually I see it not as a big problem not binch-watching for some hours =  Keeps me focused on working&#xA;  Get a off-line &#34;read it later&#34; system. A while ago I used wallabag and will reinstall it on all my devices.&#xA;Communication&#xA;  You actually can not do much about it...so nothing to improve here&#xA;  If you do not have an off-line usable email and messaging client you should get yourself one. (Telegram has a nice desktop standalone) It is nice to at least be able to search trough archived emails/chats&#xA;Information&#xA;  Off-line dictionary it is&#xA;  Is there a Firefox/Chrome Plugin that save all the web pages I visit to an off-line storage? So that I can go back in my history and access the pages I visited before...if not I might code one.&#xA;Development work&#xA;  There are a lot different off-line code documentation systems. 
I did choose zeal as it works on Linux and is standalone (the other ones work in the browser and as I most of the time surf in private mode they would not work for me, as I wipe the local storage at least every few days)&#xA;  Off-line PHP server =  Was actually quite easy. Did you know PHP has a build-in server? php -S localhost:8080&#xA;  AWS lambda offline testing framework? No clue how to this yet...maybe a good topic for another blogpost&#xA;  There are actually some clients for github with offline issue support. I will give them a try&#xA;  Cache/save web resources locally. Maybe a good idea overall..better not include Google as dependency in your project as they will abuse that data you send them with every visitor&#xA;  There is an (sadly old) StackOverflow dump, that could end up in some tool to search trough it...would be amazing. (but maybe will take a lot of disk space)&#xA;&#xA;------&#xA;&#xA;Oh girl, another thing came up: I have to show my train ticket, wich is a PDF attached to an email...that I never downloaded. What is going on here...my life goes nuts without Internet. Download your important tickets/documents&#xA;&#xA;------&#xA;&#xA;So overall this trip showed me how depending I am on the Internet and that I should change that. Please see this post as work in progress as I will update and add off-line tools when I get to know them and have more experience with them.&#xA;&#xA;Overall there is one main learning: Download stuff instead of only opening it in the browser. 
(Same here with my university pdfs...never did download them for offline use, so no research for me no)&#xA;&#xA;If someone was in this situation him or herself and found out other tools that helped I would love if you share them with me, so that I can introduce them into my stack and update this article.&#xA;&#xA;So now I hope that the Edge Internet connection I have on my mobile Hotspot right now will be enough to upload this article :&#39;D&#xA;&#xA;Wish you an awesome (online) time!&#xA;&#xA;Simon&#xA;&#xA;p.S. Another thing I found: Check what applications are using Internet on your machine, so that if you only have low bandwidth this important resource does not get sucked away by an open Skype or so.&#xA;&#xA;------&#xA;&#xA;Did you like this post? &#xA;&#xA;Donate: Donate Button or Patreon&#xA;&#xA;Feedback:  Email_ &#xA;&#xA;------&#xA;&#xA;RSS Feed - This work is licensed under Creative Commons Attribution 4.0 International License&#xA;&#xA;No WIFI Icon made by Freepik from Flaticon is licensed by CC 3.0 BY&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://simon-frey.eu/static/no_wifi.jpg" alt="No wifi on laptop image"/></p>

<p>As you might imagine from the title, I am at the moment of writing this article sitting in a train from Berlin to Hamburg. For those of you who have never been in Germany...we do have WIFI on the trains here, but contrary to what you might expect it is really bad. (And if it is sometimes fast enough, you get only 200mb of traffic &lt;= Thanks to mac address randomization that can be bypassed)</p>

<p>Wait, what? Bad WIFI on trains in the first world industry nation Germany? Yes, even during my travel on a train in Thailand I had way better WIFI than I ever experienced in the German trains. There are two main factors for that:</p>
<ul><li>Bad mobile network coverage overall... if you leave the bigger cities you most of the time do not even have Edge (yes kids, slower than 2G) or any mobile network connection at all. So sad!</li>
<li>Cheap hardware in the trains. The modems in the trains are actually standard 3G modems you can also purchase as mobile hotspot devices. Sure, they are a bit more powerful, but they are not made for this special use case: connecting to new base stations at a high rate. It actually is quite a hard technical challenge to have a modem do this on a high-speed train. But it is 2019... we are thinking about sending people to Mars... and as we can see in other countries, this problem has apparently been solved. Maybe some more money would be well invested here.</li></ul>

<p><strong>But enough ranting about the WiFi in here (which is BTW currently non-existent)</strong></p>

<p>OK, sorry, one more thought: Looking around me I see a lot of people in nice suits working on their laptops. Imagine them earning 60€/hour and needing double the time for a task because the WiFi is so weak. Assume there are 100 (a conservative estimate) such people on the train. Then during this single trip from Berlin to Hamburg (2h) there is 60€ * 100 * 2 = <strong>12,000€</strong> of wasted human capital... better not tell that to any company paying their employees for the train ride and the “work time” during this trip.</p>

<h2 id="actually-this-article-is-about-tech">Actually this article is about tech</h2>

<p>This is not the first time I have experienced this, so why am I so triggered this time that I decided to write a blog post about it? As a web developer I am currently working on a live chat project (<a href="https://gramchat.me" rel="nofollow">gramchat.me</a> – please be kind, the landing page would be finished if I could actually work here) and I wanted to finish the landing page &amp; documentation during this trip.</p>

<p>Now I find myself sitting here and my laptop, normally the device paying my rent, is no more than a dumb black box... close to every workflow I have requires the Internet, <strong>I can&#39;t work off-line</strong>. grrrrr</p>

<p>How could that happen? Normally I am always at places with good WiFi or mobile network coverage (Berlin Big City Life) and so some bad habits sneaked in:</p>
<ul><li><strong>Development work</strong>
<ul><li>Google fonts</li>
<li>Payment gateway that needs to be configured</li>
<li>Documentation (How could anyone write software before Stack Overflow?)</li>
<li>Package tools for just in time downloading of dependencies</li>
<li>Github Issues and Board for organization</li>
<li>Backend infrastructure is built on AWS Lambda (can&#39;t test that offline)</li></ul></li>
<li><strong>Entertainment</strong>
<ul><li>Movies are on Netflix</li>
<li>Music is on Spotify</li>
<li>I read mostly blog posts and web articles (via <a href="https://hackero.co" rel="nofollow">Hackero</a> ;))</li></ul></li>
<li><strong>Communication</strong>
<ul><li>Telegram/WhatsApp/Email</li></ul></li>
<li><strong>Information</strong>
<ul><li>As a non-native speaker I am struggling to write this article because I can&#39;t use Google Translate</li></ul></li>
<li><strong>...and so on</strong></li></ul>

<hr/>

<p>Short interruption: Because of other issues I had to change to another (slower) train. This one does not have WiFi at all... so now, next-level shit.</p>

<hr/>

<p>I sit here with basically three options for what to do:</p>
<ul><li><strong>Compose electronic music</strong> with LMMS, which I downloaded a few weeks ago but have no clue how to use :&#39;D</li>
<li><strong>Code something in Go</strong>. Thanks, GoLand, for your awesome built-in standard library documentation!</li>
<li><strong>Write this article</strong>, ranting about the German train situation and about myself for being so dependent on a resource I thought of as being as natural as air</li></ul>

<p>So here I am writing the article :D</p>

<h2 id="prevent-such-a-situation-in-the-future">Prevent such a situation in the future</h2>

<p><strong>So the biggest fail is me not being prepared for off-line usage of my devices.</strong> So what will I do to prevent this in the future? Technical problems need technical solutions:</p>
<ul><li><strong>Entertainment</strong>
<ul><li>Music: Have at least some of my favorite playlists available offline</li>
<li>Movies: Actually I do not see it as a big problem not to binge-watch for some hours =&gt; keeps me focused on working</li>
<li>Get an off-line “read it later” system. A while ago I used <a href="https://www.wallabag.it" rel="nofollow">wallabag</a> and will reinstall it on all my devices.</li></ul></li>
<li><strong>Communication</strong>
<ul><li>You actually cannot do much about it... so nothing to improve here</li>
<li><strong>If you do not have an off-line usable email and messaging client you should get yourself one. (Telegram has a nice standalone desktop client.) It is nice to at least be able to search through archived emails/chats</strong></li></ul></li>
<li><strong>Information</strong>
<ul><li>An off-line dictionary it is</li>
<li>Is there a Firefox/Chrome plugin that saves all the web pages I visit to off-line storage? So that I can go back in my history and access the pages I visited before... if not, I might code one.</li></ul></li>
<li><strong>Development work</strong>
<ul><li>There are a lot of different off-line code documentation systems. I chose <a href="https://zealdocs.org/" rel="nofollow">zeal</a> as it works on Linux and is standalone (the other ones work in the browser, and as I surf in private mode most of the time they would not work for me, as I wipe the local storage at least every few days)</li>
<li>Off-line PHP server =&gt; Was actually quite easy. Did you know PHP has a built-in server? <code>php -S localhost:8080</code></li>
<li>AWS Lambda offline testing framework? No clue how to do this yet... maybe a good topic for another blog post</li>
<li>There are actually some GitHub clients with offline issue support. I will give them a try</li>
<li>Cache/save web resources locally. Maybe a good idea overall... better not to include Google as a dependency in your project, as they will abuse the data you send them with every visitor</li>
<li>There is a (sadly old) StackOverflow dump that could end up in some tool to search through it... would be amazing. (but it might take a lot of disk space)</li></ul></li></ul>

<hr/>

<p>Oh girl, another thing came up: I have to show my train ticket, which is a PDF attached to an email... that I never downloaded. What is going on here... my life goes nuts without Internet. <strong>Download your important tickets/documents</strong></p>

<hr/>

<p>So overall this trip showed me how dependent I am on the Internet and that I should change that. Please see this post as a work in progress, as I will update it and add off-line tools when I get to know them and have more experience with them.</p>

<p>Overall there is one main learning: <strong>Download stuff instead of only opening it in the browser.</strong> (Same here with my university PDFs... I never downloaded them for offline use, so no research for me)</p>

<p>If you have been in this situation yourself and found other tools that helped, I would love it if you shared them with me, so that I can introduce them into my stack and update this article.</p>

<p>So now I hope that the Edge Internet connection I have on my mobile hotspot right now will be enough to upload this article :&#39;D</p>

<p>Wish you an awesome (online) time!</p>

<p>Simon</p>

<p>P.S. Another thing I found: Check what applications are using the Internet on your machine, so that if you only have low bandwidth, this important resource does not get sucked away by an open Skype or so.</p>

<hr/>

<p>Did you like this post?</p>

<p><strong>Donate:</strong> <em><a href="https://liberapay.com/l1am0" rel="nofollow"><img src="https://liberapay.com/assets/widgets/donate.svg" alt="Donate Button"/></a></em> or <a href="https://patreon.com/simonfrey" rel="nofollow"><img src="https://c5.patreon.com/external/logo/become_a_patron_button.png" alt="Patreon"/></a></p>

<p><strong>Feedback:</strong> <a href="mailto:meet@simon-frey.eu" rel="nofollow">Email</a></p>

<hr/>

<p><a href="https://blog.simon-frey.eu/feed" rel="nofollow">RSS Feed</a> – This work is licensed under <a href="http://creativecommons.org/licenses/by/4.0/" rel="nofollow">Creative Commons Attribution 4.0 International License</a></p>

<p>No WIFI Icon made by <a href="https://www.freepik.com/" rel="nofollow">Freepik</a> from <a href="https://www.flaticon.com/" rel="nofollow">Flaticon</a> is licensed by <a href="http://creativecommons.org/licenses/by/3.0/" rel="nofollow">CC 3.0 BY</a></p>
]]></content:encoded>
      <guid>https://simonfrey.writeas.com/off-line-developing-during-an-intercity-trip-in-2019</guid>
      <pubDate>Fri, 15 Feb 2019 07:47:56 +0000</pubDate>
    </item>
    <item>
      <title>[Go as in Golang] Standard net/http config will break your production environment</title>
      <link>https://simonfrey.writeas.com/go-as-in-golang-standard-net-http-config-will-break-your-production?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[or a less click baity title: An introduction to net/http timeouts&#xA;&#xA;Source: https://commons.wikimedia.org/wiki/File:Gophercolor.jpg&#xA;&#xA;------&#xA;&#xA;First of all, as you may already recognized from the titles, this blogpost is standing on the shoulder of giants. The following two blog posts inspired me to revise the net/http timeouts, as the linked blog post are at some parts outdated:&#xA;&#xA;The complete guide to golang net/http timeouts&#xA;Don&#39;t use Go&#39;s default http client&#xA;&#xA;Give them a visit after  you read this post and see how things have changed in such a short time ;)&#xA;&#xA;------&#xA;&#xA;Why you should not use the standard net/http config?&#xA;&#xA;The go core team decided to not set any timeouts at all on the standard net/http client or server config and that is a real sane decision. Why?&#xA;&#xA;To not break things! Timeouts are a highly individual setting and in more of the cases a to short timeout will break your application with a unexplainable error, than a too long one (or in GOs case none) would.&#xA;&#xA;Imaging following different use cases of the go net/http client:&#xA;&#xA;1) Downloading a big file (10GB) from a webserver. With an average (german) internet connection this would take round about five minutes.&#xA;&#xA;=  The timeout for the connection should be longer than five minutes, because anything less would break your application by canceling the download in the middle (or third, or whatever percentage) of the file.&#xA;&#xA;2) Accessing a REST API with a lot of concurrent connections. This normally should take at most a few seconds per connection&#xA;&#xA;=  The timeout should be not more than 10 seconds, as anything that takes longer would mean, that you are keeping that connection open for to long and starving your application as it only can have X (depending on system, configuration and coding) open connections. 
So if that REST API you access is broken in any way that it keeps the connections open without sending you the data you need, you want to prevent it from doing so.&#xA;&#xA;So, for what scenario should the standard lib be optimized? Trust me, you do not want to decide that for millions of developers around the globe.&#xA;&#xA;That is why we have to set the timeouts, so that they fit our use case!&#xA;&#xA;So never use the standard go http client/server! It will break your production system! (Happened to me, as I forgot my own rule ones)&#xA;&#xA;What type of timeouts occur in a HTTP connection?&#xA;&#xA;I assume you have a basic understanding of the TCP and HTTP protocols. (If not, Wikipedia is a good starting point for that)&#xA;&#xA;There are mainly three different categories of timeouts that can occur:&#xA;&#xA;During connection setup&#xA;During receiving/sending the header information&#xA;During receiving/sending the body&#xA;&#xA;As you already might expect from our two examples in the introduction, the timeout that we have to care about the most is the one regarding the body. The other ones are most of the time shorter and similar in every setup. (E.g. there is only a certain amount of headers that will be send) We still have to think and care about timeouts in the header as there are certain DOS attacks that play with malformed headers, or never closing a header (SLOWLORIS DOS attack) but we will come to this in a later point of the post.&#xA;&#xA;You should at least do this: The easy path&#xA;&#xA;net/http gives you the possibility to set a timeout for the complete transfer of data (setup, headers, body). 
It is not as fine  grained as with the later bespoken solutions, but it will help you to prevent the most obvious problems:&#xA;&#xA;Connection starving&#xA;Malformed header attacks&#xA;&#xA;So you should at least use this timeouts on every go net/http client/server you use!&#xA;&#xA;Client&#xA;&#xA;The following example client, gives you a complete timeout of 5 seconds. &#xA;&#xA;c := &amp;http.Client{&#xA;&#x9;Timeout: 5  time.Second,&#xA;}&#xA;c.Get(&#34;https://blog.simon-frey.eu/&#34;)&#xA;&#xA;If the connection is still open, it will be canceled with net/http: request canceled (Client.Timeout exceeded while reading ...)&#xA;&#xA;So this timeout would work for small files, but not for download of a large file. We will see how we can have a variable timeout for the body later in the post.&#xA;&#xA;Server&#xA;&#xA;For the server we have to set two timeouts in the easy setup. Read and write. So the ReadTimeout defines how long you allow a connection to be open during a client sends data. And with WriteTimeout it is in the other direction. (Yeah it could also be, that you send data somewhere and the packages never get accepted TCP-ACK and your server would starve again)&#xA;&#xA;s := &amp;http.Server{&#xA;&#x9;ReadTimeout: 1  time.Second,&#xA;&#x9;WriteTimeout: 10  time.Second,&#xA;&#x9;Addr:&#34;:8080&#34;,&#xA;}&#xA;s.ListenAndServe()&#xA;&#xA;So this server would listen on port 8080 and have your desired timeouts.&#xA;&#xA;For a lot of use cases, this easy path may be enough. 
But please read on and see what other things are possible :D&#xA;&#xA;[Client] In-depth configuration of  timeouts&#xA;&#xA;One thing to note before we get started here is the following differentiation: &#xA;&#xA;Easy path timeout (above) is defined for a complete request including redirects&#xA;The following configurations are per connection.(As they are defined via http.Transport, which has no information about redirects itself) So if there happen a lot of redirects, the timeouts add up per connection. You can use both, to prevent endless redirects&#xA;&#xA;Connection setup&#xA;&#xA;In the following setup are two parameters, we set with a timeout. They differ in their connection type:&#xA;&#xA;DialContext: Defines the setup timeout for an unencrypted HTTP connection&#xA;TLSHandshakeTimeout: Cares about the setup timeout for upgrading the unencrypted connection to an encryped one HTTPS&#xA;&#xA;In a 2019 setup, you should always try to talk to encrypted HTTPS endpoints, so there are very rare cases where it makes sense to only set one of the two parameters.&#xA;&#xA; c := &amp;http.Client{&#xA;    Transport: &amp;http.Transport{&#xA;&#x9;&#x9;DialContext:(&amp;net.Dialer{&#xA;&#x9;&#x9;&#x9;Timeout:   3  time.Second,&#xA;&#x9;&#x9;}).DialContext,&#xA;&#x9;&#x9;TLSHandshakeTimeout:   10  time.Second,&#xA;    }&#xA;}&#xA;c.Get(&#34;https://blog.simon-frey.eu/&#34;)&#xA;&#xA;With setting these parameters you define how long the setup of a connection should last at longest. This helps you with &#39;detecting&#39; (for actually detection you have to do more than this few lines) of hosts that are down in a faster manner. So you are not waiting in your project for a host, that is/was down in the first place.&#xA;&#xA;Response headers&#xA;&#xA;Now as we have an established (hopefully HTTPS) connection, we have to receive the meta information about the content we get. This meta information is stored in the headers. 
We can set timeouts, how long we want the server to be able to answer us. &#xA;&#xA;Here again are two different timeouts to be defined:&#xA;&#xA;ExpectContinueTimeout:  This configures how long you want to wait after you send your payload for the beginning of an answer (in form of the beginning of the header)&#xA;ResponseHeaderTimeout:  And with this parameter you set how long the complete transfer of the header is allowed to last &#xA;&#xA;So you want to have the complete header information ExpectContinueTimeout + ResponseHeaderTimeout after your did send you complete request&#xA;&#xA;c := &amp;http.Client{&#xA;&#x9;Transport: &amp;http.Transport{&#xA;&#x9;&#x9;ExpectContinueTimeout: 4  time.Second,&#xA;&#x9;&#x9;ResponseHeaderTimeout: 10  time.Second,&#xA;    },&#xA;}&#xA;c.Get(&#34;https://blog.simon-frey.eu/&#34;)&#xA;&#xA;With setting this parameters, we can define how long we accept the server to take for an answer and therefore also for internal operations. &#xA;&#xA;Imagine following scenario: Your access an API, that will resize an image you send to it. So you upload the image and normally it takes ~1 second to resize the image and than start sending it back to your service. But maybe the API crashes of whatever reasons and takes 60 seconds to resize the image. As you now defined the timeouts, you can abort after a couple of seconds and tell your own customers that API xyz is down and that you are in contact with the supplier...better than having your fancy image editor loading for ages and not showing any status information, and that all because of a bug that is not even your fault!&#xA;&#xA;Body&#xA;&#xA;Per definition, the timeout for the body is the hardest, as this is the part of the response that will vary the most in size and thereby time it needs for transfer. 
&#xA;&#xA;We will cover two approaches that help you to define a timeout on the body:&#xA;&#xA;Static timeout, that kills the transfer after a certain amount of time&#xA;Variable timeout, that kills the timeout after there was no data transfered for a certain amount of time&#xA;&#xA;Static timeout&#xA;&#xA;We are dropping all errors in the example code. You should not do that!&#xA;&#xA;c := &amp;http.Client{}&#xA;resp,  := c.GET(&#34;https://blog.simon-frey.eu&#34;)&#xA;defer resp.Body.Close()&#xA;&#xA;time.AfterFunc(5time.Second, func() {&#xA;&#x9;resp.Body.Close()&#xA;})&#xA;bodyBytes, := ioutil.ReadAll(resp.Body)&#x9;&#xA;&#xA;In the code example we set a timer, that executes resp.Body.Close() after it finished. With this command we close the body and the ioutil.ReadAll will throw a read on closed response body error.&#xA;&#xA;Variable timeout&#xA;&#xA;We are dropping most of the errors in the example code. You should not do that!&#xA;&#xA;c := &amp;http.Client{}&#xA;resp,  := c.GET(https://blog.simon-frey.eu&#34;)&#xA;defer resp.Body.Close()&#xA;&#xA;timer := time.AfterFunc(5time.Second, func() {&#xA;&#x9;resp.Body.Close()&#xA;})&#x9;&#xA;                 &#xA;bodyBytes := make([]byte, 0)&#xA;for {&#xA;&#x9;//We reset the timer, for the variable time&#xA;&#x9;timer.Reset(1  time.Second)&#xA;&#xA;&#x9;, err = io.CopyN(bytes.NewBuffer(bodyBytes), resp.Body, 256)&#xA;&#x9;if err == io.EOF {&#xA;&#x9;&#x9;// This is not an error in the common sense&#xA;        // io.EOF tells us, that we did read the complete body&#xA;&#x9;&#x9;&#x9;break&#xA;&#x9;} else if err != nil {&#xA;&#x9;&#x9;//You should do error handling here&#xA;        break&#xA;&#x9;}&#xA;}&#xA;&#xA;The difference here is, that we have a endless loop, that iterates over the body and copies data out of it. 
There are two options how this loop will be left:&#xA;&#xA;We get the io.EOF file error from io.CopyN, this means we read the complete body and no timeout neds to be triggered&#xA;We get another error, if that error is the read on closed response body error the timeout triggered.&#xA;&#xA;This solutions works, because io.CopyN is blocking. So if there is not enough (in our case 256 bytes) to read from the body it will wait. If the timeout triggers during that time, we stop the execution.&#xA;&#xA;My &#39;default&#39; config&#xA;&#xA;Again: This is my very own opinion on the timeouts and you should adapt them to the requirements of your project! I do not use this exact same setup in every project!&#xA;&#xA;c := &amp;http.Client{&#xA;&#x9;Transport: &amp;http.Transport{&#xA;&#x9;&#x9;DialContext:(&amp;net.Dialer{&#xA;&#x9;&#x9;&#x9;Timeout:   10  time.Second,&#xA;&#x9;&#x9;&#x9;KeepAlive: 10  time.Second,&#xA;&#x9;&#x9;}).DialContext,&#xA;&#x9;&#x9;TLSHandshakeTimeout:   10  time.Second,&#xA;           &#xA;&#x9;&#x9;ExpectContinueTimeout: 4  time.Second,&#xA;&#x9;&#x9;ResponseHeaderTimeout: 3  time.Second,&#xA;&#x9;&#x9;&#xA;        // Prevent endless redirects&#xA;        Timeout: 10  time.Minute,&#xA;&#x9;},&#xA;}&#xA;&#xA;[Server] In-depth configuration of  timeouts&#xA;&#xA;As there are no certain dial up timeouts for http.Server we will directly start into the timeouts for the headers.&#xA;&#xA;Headers&#xA;&#xA;For the request headers we have a certain timeout: ReadeHeaderTimeout, which represents the time until the full request header (send by a client) should be read. So if a client takes longer to send the headers the connection will time out. 
This timeout is especially important against attacks like SLOWLORIS as here the header never gets closed and the connection thereby will be kept open all the time.&#xA;&#xA;s := &amp;http.Server{&#xA;&#x9;ReadHeaderTimeout:20 time.Second,&#xA;}&#xA;s.ListenAndServe()&#xA;&#xA;As you may already have recognized, there is only a ReadHeaderTimeout, because for the sending of data to the client go does not have a certain distinction between the headers and the body for the timeout&#xA;&#xA;Body&#xA;&#xA;Here we have to differentiate between request (that is send from the client to the server) and the response body. &#xA;&#xA;Response body&#xA;&#xA;For the response body there is only one static solution for a timeout:&#xA;&#xA;s := &amp;http.Server{&#xA;&#x9;WriteTimeout:20 time.Second,&#xA;}&#xA;s.ListenAndServe()&#xA;&#xA;As long as the connection is open, we can not differentiate if the data was send correctly or if the client is doing bogus here. But as we know our payload data, it is quite straight forward to set the timeout here on our past information we have about our server. So if you are a file server this timeout should be longer than for a API server. You can set no timeout for testing purpose and track how long a &#39;normal&#39; request takes. Add a few percent of variance there and then you should be good to go!&#xA;&#xA;Request body&#xA;&#xA;Attention: If you did set the WriteTimeout it will have an effect on the request timeout as well. This is because of the defintion if the WriteTimeout. It starts when the headers of the request where read.  
So if reading from the request body takes 5 seconds and your write timeout is 4 seconds it will also kill the reading of the request body!&#xA;&#xA;For the request body there are again two possible solutions:&#xA;&#xA;Static timeouts that we can define via the http.Client config&#xA;Variable timeouts for that we have to build our own code workaround (as there is currently no support for that)&#xA;&#xA;Static&#xA;&#xA;For a static timeout we can use the ReadTimeout parameter we already used in the easy path:&#xA;&#xA;s := &amp;http.Server{&#xA;&#x9;ReadTimeout:20  time.Second,&#xA;}&#xA;s.ListenAndServe()&#xA;&#xA;Variable&#xA;&#xA;For the variable timeout we need to work on the level of the handlers. Do not set a ReadTimeout, because the static timeout will interfere with the variable one. Also you must not set WriteTimeout as it is counted from the end of the request header and thereby also will interfere with the variable header&#xA;&#xA;We have to define our own handler for the server, in our example we call it timeoutHandler. 
This handler does nothing than reading from the body with our loop and timeout if there is no data send anymore.&#xA;&#xA;type timeoutHandler struct{}&#xA;func (h timeoutHandler) ServeHTTP(w http.ResponseWriter, r http.Request){&#xA;&#x9;defer r.Body.Close()&#xA;&#xA;&#x9;timer := time.AfterFunc(5time.Second, func() {&#xA;&#x9;&#x9;r.Body.Close()&#xA;&#x9;})&#xA;&#xA;&#x9;bodyBytes := make([]byte, 0)&#xA;&#x9;for {&#xA;&#x9;&#x9;//We reset the timer, for the variable time&#xA;&#x9;&#x9;timer.Reset(1  time.Second)&#xA;        &#xA;&#x9;&#x9;, err := io.CopyN(bytes.NewBuffer(bodyBytes), r.Body, 256)&#xA;&#x9;&#x9;if err == io.EOF {&#xA;&#x9;&#x9;&#x9;// This is not an error in the common sense&#xA;&#x9;&#x9;&#x9;// io.EOF tells us, that we did read the complete body&#xA;&#x9;&#x9;&#x9;break&#xA;&#x9;&#x9;} else if err != nil {&#xA;&#x9;&#x9;&#x9;//You should do error handling here&#xA;&#x9;&#x9;&#x9;break&#xA;&#x9;&#x9;}&#xA;&#x9;}&#xA;}&#xA;&#xA;func main() {&#xA;&#x9;h := timeoutHandler{}&#xA;&#x9;s := &amp;http.Server{&#xA;&#x9;&#x9;ReadHeaderTimeout:20 time.Second,&#xA;&#x9;&#x9;Handler:h,&#xA;&#x9;&#x9;Addr:&#34;:8080&#34;,&#xA;&#x9;}&#xA;&#x9;s.ListenAndServe()&#xA;}&#xA;&#xA;It is a similar approach to the one we did choose in the client. You have define this timeout loop in every handler you have separately. So you maybe should consider building a function for that, so that you don&#39;t have to rewrite the coder over and over again.&#xA;&#xA;My &#39;default&#39; config&#xA;&#xA;Again: This is my very own opinion on the timeouts and you should adapt them to the requirements of your project! I do not use this exact same setup in every project!&#xA;&#xA;s := &amp;http.Server{&#xA;&#x9;ReadHeaderTimeout:20 time.Second,&#xA;&#x9;ReadTimeout: 1  time.Minute,&#xA;&#xA;    WriteTimeout: 2  time.Minute,&#xA;}&#xA;&#xA;Conclusion&#xA;&#xA;I hope you liked this blog post and it helped you to understand the different timeouts in go a little bit better. 
If you have any feedback, questions or just want to say &#39;Servus&#39; (bavarian german for hello) do not hesitate to contact me!&#xA;&#xA;Feedback:  Email &#xA;&#xA;Donate: Donate Button or Patreon&#xA;&#xA;RSS Feed - This work is licensed under Creative Commons Attribution 4.0 International License&#xA;&#xA;------&#xA;&#xA;Sources&#xA;&#xA;https://golang.org/pkg/net/http/&#xA;&#xA;https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779&#xA;&#xA;https://blog.cloudflare.com/exposing-go-on-the-internet/ &#xA;&#xA;Gopher Image (CC BY-SA 3.0): Wikimedia]]&gt;</description>
      <content:encoded><![CDATA[<p>or a less click-baity title: <strong>An introduction to net/http timeouts</strong></p>

<p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/44/Gophercolor.jpg/800px-Gophercolor.jpg" alt="Source: https://commons.wikimedia.org/wiki/File:Gophercolor.jpg"/></p>

<hr/>

<p>First of all, as you may have already recognized from the title, this blog post is standing on the shoulders of giants. The following two blog posts inspired me to revisit the net/http timeouts, as they are outdated in some parts:</p>
<ul><li><a href="https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/" rel="nofollow">The complete guide to golang net/http timeouts</a></li>
<li><a href="https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779" rel="nofollow">Don&#39;t use Go&#39;s default http client</a></li></ul>

<p>Give them a visit after you have read this post and see how things have changed in such a short time ;)</p>

<hr/>

<h2 id="why-you-should-not-use-the-standard-net-http-config">Why should you not use the standard net/http config?</h2>

<p>The Go core team decided not to set any timeouts at all in the standard net/http client or server config, and that is a really sane decision. Why?</p>

<p>To not break things! Timeouts are a highly individual setting, and in most cases a too-short timeout will break your application with an unexplainable error sooner than a too-long one (or, in Go&#39;s case, none at all) would.</p>

<p>Imagine the following different use cases of the Go net/http client:</p>

<p>1) Downloading a big file (10GB) from a webserver. With an average (German) internet connection this would take roughly five minutes.</p>

<p>=&gt; The timeout for the connection should be longer than five minutes, because anything less would break your application by canceling the download in the middle (or third, or whatever percentage) of the file.</p>

<p>2) Accessing a REST API with a lot of concurrent connections. This should normally take at most a few seconds per connection.</p>

<p>=&gt; The timeout should be no more than 10 seconds, as anything that takes longer would mean that you are keeping that connection open for too long and starving your application, as it can only have X (depending on system, configuration and code) open connections. So if the REST API you access is broken in a way that keeps connections open without sending you the data you need, you want to prevent it from doing so.</p>

<p>So, for what scenario should the standard lib be optimized? Trust me, you do not want to decide that for millions of developers around the globe.</p>

<p>That is why we have to set the timeouts, so that they fit our use case!</p>

<p><strong>So never use the standard Go http client/server! It will break your production system!</strong> <em>(Happened to me, as I forgot my own rule once)</em></p>

<h2 id="what-type-of-timeouts-occur-in-a-http-connection">What types of timeouts occur in an HTTP connection?</h2>

<p>I assume you have a basic understanding of the <a href="https://en.wikipedia.org/wiki/Transmission_Control_Protocol" rel="nofollow">TCP</a> and <a href="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" rel="nofollow">HTTP</a> protocols. (If not, Wikipedia is a good starting point for that)</p>

<p>There are mainly three different categories of timeouts that can occur:</p>
<ul><li>During connection setup</li>
<li>During receiving/sending the header information</li>
<li>During receiving/sending the body</li></ul>

<p>As you might already expect from our two examples in the introduction, the timeout that we have to care about the most is the one regarding the body. The other ones are most of the time shorter and similar in every setup. (E.g. there is only a certain amount of headers that will be sent) We still have to think and care about timeouts for the headers, as there are certain DoS attacks that play with malformed headers or never closing a header (<a href="https://www.slashroot.in/slowloris-http-dosdenial-serviceattack-and-prevention" rel="nofollow">SLOWLORIS DoS attack</a>), but we will come to this at a later point in the post.</p>

<h2 id="you-should-at-least-do-this-the-easy-path">You should at least do this: The easy path</h2>

<p>net/http gives you the possibility to set a timeout for the complete transfer of data (setup, headers, body). It is not as fine-grained as the solutions discussed later, but it will help you prevent the most obvious problems:</p>
<ul><li>Connection starving</li>
<li>Malformed header attacks</li></ul>

<p><strong>So you should at least use these timeouts on every Go net/http client/server you use!</strong></p>

<h3 id="client">Client</h3>

<p>The following example client gives you a complete timeout of 5 seconds.</p>

<pre><code class="language-go">c := &amp;http.Client{
	Timeout: 5 * time.Second,
}
c.Get(&#34;https://blog.simon-frey.eu/&#34;)
</code></pre>

<p>If the connection is still open, it will be canceled with <code>net/http: request canceled (Client.Timeout exceeded while reading ...)</code></p>

<p>So this timeout would work for small files, but not for the download of a large file. We will see later in the post how we can have a variable timeout for the body.</p>

<h3 id="server">Server</h3>

<p>For the server we have to set two timeouts in the easy setup: read and write. The <code>ReadTimeout</code> defines how long you allow a connection to be open while a client sends data. And <code>WriteTimeout</code> is the other direction. (Yeah, it could also be that you send data somewhere, the packets never get acknowledged <code>TCP-ACK</code>, and your server would starve again)</p>

<pre><code class="language-go">s := &amp;http.Server{
	ReadTimeout:  1 * time.Second,
	WriteTimeout: 10 * time.Second,
	Addr:         &#34;:8080&#34;,
}
s.ListenAndServe()
</code></pre>

<p>So this server would listen on port <code>8080</code> and have your desired timeouts.</p>

<p><strong>For a lot of use cases, this easy path may be enough. But please read on and see what other things are possible :D</strong></p>

<h2 id="client-in-depth-configuration-of-timeouts">[Client] In-depth configuration of timeouts</h2>

<p>One thing to note before we get started here is the following differentiation:</p>
<ul><li>The easy-path timeout <em>(above)</em> is defined for a complete request, including redirects</li>
<li>The following configurations are per connection. (They are defined via <code>http.Transport</code>, which has no information about redirects itself.) So if a lot of redirects happen, the timeouts add up per connection. You can use both together to prevent endless redirects</li></ul>

<h3 id="connection-setup">Connection setup</h3>

<p>In the following setup are two parameters, we set wi