Simon Frey

Writing about tech, podcasting and other thoughts that come to my mind.

golang gopher benchmark wednesday

Known length slice initialization speed – Golang Benchmark Wednesday

I stumbled over the hint that it is better for performance to initialize your slices with a dedicated length and capacity if you know them. Sounds like it makes sense, but I wouldn't be me if I just accepted that without testing the hypothesis.

An example that I use in real life is creating a slice of IDs for querying a database later on with those IDs: iterating over the original data structure (in my case a 'map[string]SimonsStruct{Id int, MORE FIELDS}') and copying the IDs out.

Normally I used 'make([]int,0)' (len == 0 & cap == 0), so let's see whether it would be faster to initialize the slice directly with the right capacity and length.
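
In code, that real-life pattern looks roughly like this (a minimal sketch using the struct from above, already with the preallocation this post is about):

type SimonsStruct struct {
	Id int
	// MORE FIELDS
}

// extractIds copies all ids out of the map. The final length is known
// upfront (len(data)), so the slice can be preallocated.
func extractIds(data map[string]SimonsStruct) []int {
	ids := make([]int, 0, len(data))
	for _, v := range data {
		ids = append(ids, v.Id)
	}
	return ids
}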

Keep in mind the tests only work if you know the size of the target slice upfront. If not, sadly this Benchmark Wednesday will not help you.

Benchmark Code

Bad: Initialize slice empty even if you know the target size

const size = 1000 // one of the benchmarked sizes, see the table below

func BenchmarkEmptyInit(b *testing.B) {
	for n := 0; n < b.N; n++ {
		data := make([]int, 0)
		for k := 0; k < size; k++ {
			data = append(data, k)
		}
	}
}

Best for big size: Initialize slice with known capacity and add data with append

const size = 1000 // one of the benchmarked sizes, see the table below

func BenchmarkKnownAppend(b *testing.B) {
	for n := 0; n < b.N; n++ {
		data := make([]int, 0, size)
		for k := 0; k < size; k++ {
			data = append(data, k)
		}
	}
}

Best for small & medium size: Initialize slice with known capacity & length and add data with direct access

const size = 1000 // one of the benchmarked sizes, see the table below

func BenchmarkKnownDirectAccess(b *testing.B) {
	for n := 0; n < b.N; n++ {
		data := make([]int, size) // len == size, cap == size
		for k := 0; k < size; k++ {
			data[k] = k
		}
	}
}
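
In case you want to reproduce the numbers: the benchmarks run with the standard Go tooling, e.g.

go test -bench=.

The ns/op value Go reports is the time for one full init of the slice, which is what the table below lists (converted to ms/s for the bigger sizes).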

Results

The table shows the time it took each example to initialize all its elements (only the work inside the benchmark loop is measured). Have an eye on the unit! (ns/ms/s)

#Elements (size)    EmptyInit    KnownAppend    KnownDirectAccess
1                   31.00 ns     1.52 ns        0.72 ns
100                 852 ns       81.4 ns        59.1 ns
100 000             1.11 ms      0.22 ms        0.20 ms
1 000 000           10.76 ms     3.13 ms        3.14 ms
100 000 000         2.48 s       0.21 s         0.22 s
300 000 000         6.79 s       0.90 s         0.95 s

Interpretation

That initializing the slice with len & capacity 0 would be the worst was obvious, but I am still surprised that the append approach outperforms direct access for bigger sizes.

But after thinking about it, it totally makes sense. The direct access approach needs to write every entry twice:

1) Initializing the whole array with the type's zero value (in our case int, so '0')
2) Writing the actual value into the slice

Step 1) is not needed with the append approach: we just reserve the memory, and each value is only written once, in step 2). For bigger slices this setup overhead outweighs the performance benefit of direct access. It gets even more significant if the values in the slice are not simple ints but something bigger (e.g. a struct with a lot of fields), as the setup then has to initialize even more zero values.
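
If you want to check that last claim yourself, you can rerun the direct-access benchmark with a wider element type. This variant is a sketch of my own and was not part of the measurements above:

type wide struct {
	A, B, C, D int64
	Label      string
}

func BenchmarkKnownDirectAccessStruct(b *testing.B) {
	for n := 0; n < b.N; n++ {
		// The runtime first zeroes size * sizeof(wide) bytes here...
		data := make([]wide, size)
		for k := 0; k < size; k++ {
			// ...and then every element is written a second time.
			data[k] = wide{A: int64(k)}
		}
	}
}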

Conclusion

The hint I found online was right: if you know the size of your target slice, always initialize it with that size as capacity. For small & medium slices use the direct access approach; for very big slices use append.

Thanks for reading and see you next week!

You got any feedback? Would love to answer it on HackerNews

P.S. There is an RSS Feed

Microsoft Azure Anger

Last weekend I attended a hackathon at Microsoft. Overall it was an awesome experience and I had a lot of fun, so this post has nothing to do with the event itself, and neither does it reflect my overall opinion of Microsoft. They do awesome stuff in a lot of fields, but with Azure they are definitely underdelivering.

During the event I got in contact with the Azure platform. Our project idea was a website where you can search for news, which are then sorted by “happiness” via sentiment analysis. The news search and sentiment analysis are offered via Azure's so-called cognitive services, which abstract the ML models away and let you simply use an API to access them... so far so good. With this premise most of you coders out there will think: “This sounds too easy to fill 24h of programming”. Exactly what I thought... and I was already planning to also code an Alexa skill and so on to fill the time. With two experienced developers, we thought the backend would be done in about 4h (a conservative estimate), as it would only be stitching together three APIs and exposing the result as a JSON REST API for our frontend team. To keep the fun up and learn more during the project, we decided to build the backend as a serverless function. But then Azure got in our way...

In the end it took us ~9h to develop the backend as a serverless function, consisting mainly of a 40-line JavaScript file we had to develop in the in-browser “editor” that Azure offers, as all the other approaches we tried didn't work out and we ended up abandoning them. Once again: 9 hours for 40 lines of JS code stitching together three APIs... that is insane. (Btw at 3 am we decided to switch to GCP (Google Cloud Platform) and there the job was done in about 45 minutes.)

For sure we did things wrong and it could have been done faster, but this blog post is about the hard onboarding and overall bad structure of Azure. Please also keep in mind that Azure is still in a more-or-less early stage and not all of it is broken. In the following I will walk you through the timeline of this disaster and the suggestions I have in mind to fix some of the most confusing steps. I will actually try to avoid these mistakes in my own future projects, so thanks Microsoft for showing me how not to do things xD

Just a bit more background: my partner in the backend had some experience with GCP and I do most of my current projects with AWS, so we knew how things work there... couldn't be too hard to transfer that knowledge to the Azure platform.

Start of the project

So first of all: creating a new Azure account is not that hard, and after entering credit card info you get $100 of free credit. I actually like how Microsoft solved this: you have two plans. You start with the $100 free tier, and if you spend all of that money you have to manually switch to the pay-as-you-go plan. That protects you from opening an account, doing some testing, forgetting about it and then getting a huge bill a month later (happened to me with AWS). So that is nice for protecting new users who are just starting to test the system. Good job here, Microsoft!

After setting up the account I created a new project and added some of the resources we needed. When creating a serverless function I noticed the tag “(Preview)” on the function but didn't think more about it... actually, that tag should read something like Experimental/Do not use/Will most likely not work properly. We created a Python serverless function (apparently Python functions are still beta there) and tried to get some code in there.

There are three ways to get code into an Azure function:

  • Web “editor”
  • Azure CLI
  • VS Code

...for full-featured functions. As we selected the experimental/beta/preview functionality Python, we only had the latter two options. Not that bad, as it is the same with AWS and I am used to deploying my code via the AWS CLI... shouldn't be much harder with Azure.

My suggestion: do not publish functionality that is obviously not ready yet. Do internal testing instead of using your users for that task.

Azure plugins for VS code

Microsoft offers a wide range of VS Code plugins for Azure. As that is my main editor anyway, I wanted to give them a try. For serverless functions you need the functions plugin and about 9 other mandatory ones that act as some sort of base plugins. 500MB and three VS Code crashes later, the required plugins were finally installed properly. The recommended login method did not work and I had to authenticate via the browser instead. Not that big of a deal, but as they recommend the inline method one would think it should work. (It didn't work for the other folks in my team either... so it had nothing to do with my particular machine.)

You would think that 500MB should be enough to finally be able to deploy some code... but you still need 200MB more for the Azure CLI, which is required for the plugins to work properly.

Having finally installed all of it, you can see all your Azure functions and resources in VS Code. I started to get a bit excited, as it looked like from now on development would be straightforward and easier than what I am used to from AWS.

But those 700MB of tooling did not work properly... the most important function, “deploy”, failed without any detailed error message... AAAAAAARRRG. Why do I have to install all that crap when it can't even do the most basic task it exists for: get my code into their cloud.

Keep your tooling modular and try to do fewer things, but do them right

Code templates

A nice idea: on creating a new serverless function, Azure greets you with basic boilerplate code showing you how to handle the basic data interfaces.

It might have been because we selected the alpha functionality “Python”, but we didn't actually get Python code here, we got JavaScript. So your function is prepopulated with code that cannot run because it is in the wrong programming language. We were lucky and recognized that right away, but you could get really confusing error messages if you start developing in JS while actually having a Python runtime.

Better no boilerplate code than one in the wrong programming language

But at least it is colorful

So, next try with the Azure CLI. The first thing you notice is that the CLI has all sorts of different colors... but that does not help if you are annoyed and want to get things done.

You see the same thing in the Azure web interface... it has quite a few UX issues, but it does offer over five color themes for styling the UI... Microsoft, I'm not sure you set your priorities right here ;)

The CLI did not get us where we wanted either... whether due to our own incompetence or due to the CLI itself, no clue. Either way I would blame Azure, as it is their job to help developers onboard and get at least the basic tasks (we still only wanted to deploy a simple “hello world”) done in an acceptable time.

Focus less on making your UI shine in every color of the rainbow and try to improve documentation and onboarding examples

Full ownership of a resource still does not give you full privileges

After finally being able to deploy at least the “hello world”, we wanted to go a step further... and work concurrently on the project. Yes, until now we had mainly done pair programming on a single machine.

As the owner of the resource, I wanted to give my teammate full access to it, so that he could work on it and add functions if required. I granted him “owner” access rights (the highest available), but he was still not able to work properly with the function. In the web UI it did work more or less, but then again in VS Code there was no chance to do anything (adding a function or deploying). I ended up doing something that goes against everything I learned about security: I logged in with my credentials on his machine.

So imagine yourself sitting in front of your laptop for about 4½ hours without managing to do any of the actual work you set out to do.

Ditching Azure Functions and switching to GCP

That was the moment we ditched the idea of doing the backend as an Azure function. We switched to GCP and started all over again. As I had never worked with that platform either, I expected a similarly hard start to the one I had just had with Azure. But about 25 minutes later we had achieved more on GCP than on Azure in all the hours before.

Something both Azure and GCP do better than AWS: they show the logs of a serverless function in the same window as the function itself. AWS has a different approach, where you have to switch to the cloud logs when you want info about your function and how it ran. Props to both Google and Microsoft for solving this a lot better!

Actually a hint for AWS: give your users all controls and info in a single place

Cognitive services

The prizes you could win at the hackathon were tied to using Azure, so we stuck with the cognitive services for the news search and the sentiment analysis. Overall the API is straightforward: send your data and get the results back.

One thing we were told in a presentation that you should keep in mind when using the cognitive services: you do not control the model, and it could change at any moment. So if you use the cognitive services in production, you should continuously check that the API didn't change its behavior in a way that influences your product badly. But most of the time it is still a lot cheaper and better than building the model yourself.

The problems we had with the services were again authentication issues. Quite confusingly, some of the cognitive services (e.g. the sentiment analysis) have different API base URLs depending on where you registered the service, and others do not. I assume they need that manual setting of data centers for a particular (unknown to me) reason; if so, I would propose binding all the cognitive services to a location.

The news search, for example, is not bound to a location, so we had two different behaviors of the API base URLs in our short and simple application:

  • One URL for all locations.
  • Only a certain location is valid for your resource. If you point to the wrong API location, you get an “unauthorized” response

Pointing to the wrong location is pure incompetence on the developer's side, but it would help a lot if there were a distinct error code/message for that scenario.
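
To make the second behavior concrete, here is a minimal Go sketch of the kind of request we were sending (the region, endpoint path and payload are illustrative assumptions, not a copy of our actual code):

package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	payload := []byte(`{"documents":[{"id":"1","language":"en","text":"happy news"}]}`)

	// The sentiment endpoint is bound to the region the resource was
	// created in; 'westeurope' is just an example.
	url := "https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
	req, err := http.NewRequest("POST", url, bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Ocp-Apim-Subscription-Key", "YOUR-KEY")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// A 401 here can mean a bad key *or* the wrong region - Azure
	// does not tell you which one it is.
	fmt.Println(resp.Status)
}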

Have the same base URL behavior for all cognitive services

Return some sort of 'wrong location' error if you have a valid API token but are pointing to the wrong location

Insufficiently documented SDKs

Azure offers SDKs for their services, and we gave the JS SDK for the cognitive services a try. Here we had both ups and downs. First, props to the developers coding the SDKs: they are straightforward and do what they should. Even the code itself looks good... but why the hell do I have to read the source of the SDKs to find all the options the functions offer? If you stick to the documentation provided via the GitHub readme or NPM, you only get a fraction of the functionality. We were confused that Microsoft's own SDKs seemed not to be API-complete. Looking into the code, we saw they actually are, and offer a lot more options than documented.

Please Microsoft: Properly document your functionalities!

IMO there must be deep problems with the internal release processes at Azure. It is not acceptable that an IT company that has been in the industry for so long allows itself such a basic mistake. You should not release your products (and I see the SDKs as such) without proper documentation.

“Code Examples”

During our trial-and-error period of getting the JS SDK running, we stumbled upon the quickstart guide for the cognitive services: “Quickstart: Analyze a remote image using the REST API with Node.js in Computer Vision”.

Instead of using their own SDK and explaining how to use it, they show you how to manually build an HTTP request in JS. Sure, that can be helpful for new JS coders, but if you have an SDK for that exact purpose... why are you not using it? Looks like the left hand doesn't know what the right hand is doing.

Stick to one way of doing things. If you have an SDK, also use it in your quickstart guides to be consistent

Conclusion

In the end we did port the code back from GCP to an Azure function (again ~1h of work). We chose JS instead of Python and coded completely in the web UI... that worked. I now know how real Microsoft business developers do their daily business... never leave the web UI and just accept that life is hard.

Microsoft failed to deliver an adequate experience here and lost me as a potential customer. How can it be that I was able to do the same things in a fraction of the time on GCP? (And keep in mind: it was already 3 am, I was super tired, and I had never worked with GCP before.)

None of the three major players is perfect, and sure, I understand it is hard to deliver fast while keeping up good quality in this highly competitive market. But maybe actually going the extra step will help to win in the end.

Once again: this is me rating only the onboarding experience of Azure! It is not a general opinion on Microsoft.

One last thing: the Azure web UI didn't work in Chrome. So if you have issues there, Firefox did the trick for us ;)


Why every SaaS company should reevaluate their live chat strategy in 2019

What would you say if I told you that you are leaving over 70% of your potential customers on the table because you do not have a live chat on your website? FurstPerson discovered exactly that: 77% of customers are very unlikely to make a purchase if you do not offer a live chat. Wow! That huge number should be enough to convince everyone to immediately search for a live chat solution that wins those customers back.

The technical setup of a website live chat is the easy part. There are other points you have to think about, and I hope this article helps you reevaluate your live chat strategy.

Customers demand live chat support

But email support has worked for a decade now, so why are customers suddenly demanding a live chat support channel?

We are living in a time where we can get everything we want in no time. Do you sometimes find yourself annoyed that you have to wait until the next day for your Amazon delivery to arrive at your door? Or when your spouse does not reply within minutes? Your customers are as impatient as you are!

When your (potential) customers have a question, they want it answered in a few minutes instead of waiting a day until their email ticket is finally read. In 2019, customers no longer excuse a slow support channel!

So if competitors are able to answer customers' questions faster than you are, they will outperform you in sales and overall customer satisfaction.

We as entrepreneurs have to adapt to this new customer requirement to stay in the game! Every site should have a live chat option

A bad experience is worse than no live chat at all

The only thing that hurts a business more than no live chat is a bad live chat experience. You are not done by just putting a widget on your page and configuring it to send you an email. I hate it when I open the live chat window, type in my question and then, after a minute, a bot tells me that the team is away and that I should please use some weird email form. Why the hell is there a live chat window at all if no one answers within 2 minutes? If you use your chat that way, please send your customers directly to the email form and tell them how long they will have to wait for a response on average. You are then not 100% in line with the modern live chat trend, but it is still way better than an email form dressed up as a live chat window!

If your live chat is just another design for your email form, please do not use that live chat at all

Live chat support needs (wo-)manpower

Bots, artificial intelligence and machine learning sound very nice in live chat companies' sales pitches, but in the end you always need a human being on the other side of your customer live chat. Technology will help you to a certain point, but the biggest value you create is your customer feeling appreciated. You show that you and your company care so much about them that there is always a human being happy to help with all their issues.

Have you ever had a friend tell you about a company that helped them super fast with an issue? I'm 100% sure that friend is still a customer of that company. We humans want to feel valued, and if someone does that, we will stick with that person/company.

Value your customers with human support agents instead of heartless bots. This investment will definitely pay off!

Your live chat is your most honest feedback channel

Compared to dedicated customer surveys and feedback forms, your live chat can and will be your most honest feedback channel. You will experience the problems your customers have with the product the second they stumble upon them. And yes... sometimes just reading the FAQ would have helped your customer circumvent the problem, but that is not how customers function. They want your product to make their life easier, and they will not work through extensive manuals to understand how to use it.

If you get the same questions over and over again, you should definitely think about changing your product at that particular point. And best of all: you can just ask your customers during the live chat session what would help them circumvent the problem in the future. They will feel valued and you get a customer survey for free ;)

Live chat helps you understand, just in time, which problems your customers stumble upon. Use that feedback to improve your product

With great power comes great responsibility

One thing at the end: there are awesome pull-marketing features in some extensive live chat solutions, but please use them wisely. A lot of your potential customers will run away from your website screaming if you bombard them with popups and windows: “Here is our newsletter”, “Get 20% off”, “Start a live chat with us”

You can use certain triggers if you notice your customer is stuck somewhere. Maybe they are hovering over the pricing page for 20 seconds or are extensively scrolling up and down. Then it is a good moment to offer them live chat support by automatically opening the live chat window. The fact that they have been on your site for 2 seconds is no valid reason!

Try to only automatically open the chat window if your customer seems stuck. Please do not annoy them with useless popups. They will find the live chat in the bottom right corner when they need it

Live chat solution for solopreneurs

Full disclosure: I am the co-founder of gramchat

With gramchat we tried to solve the issues described above and create a live chat solution that helps you serve your customers best. Gramchat sends customer messages directly to your Telegram messenger, and from there you can answer them directly – no extra app required. At the gym, having a beer with friends or during your day job: help your customers wherever you are.

With the Telegram integration we try to solve the “live chat is just an email form” problem. With gramchat you are able to answer your customers within the important first 90 seconds.

But enough advertisement! There are several great live chat solutions out there, and you should pick the one that suits you best. For a small team or a solopreneur, gramchat may be the perfect fit :D

I would love it if you gave it a try => gramchat.me

Wish you all an awesome time! Simon


As you might imagine from the title, at the moment of writing this article I am sitting in a train from Berlin to Hamburg. For those of you who have never been to Germany... we do have WIFI on the trains here, but contrary to what you might expect it is really bad. (And if it is occasionally fast enough, you only get 200MB of traffic <= thanks to MAC address randomization that limit can be bypassed)

Wait, what? Bad WIFI on trains in the first-world industrial nation Germany? Yes, even on a train in Thailand I had way better WIFI than I have ever experienced on German trains. There are two main factors:

  • Bad mobile network coverage overall... if you leave the bigger cities, most of the time you do not even have EDGE (yes kids, slower than 3G) or any mobile network connection at all. So sad!
  • Cheap hardware in the trains. The modems in the trains are actually standard 3G modems you could also buy as mobile hot-spot devices. Sure, they are a bit more powerful, but they are not made for this special use case: connecting to new base stations at a high rate. It is actually quite a hard technical challenge to have a modem do this on a high-speed train. But it is 2019... we are thinking about sending people to Mars... and as we can see in other countries, this problem is apparently solved. Maybe some more money would be well invested here.

But enough ranting about the WIFI in here (which is BTW currently non-existent).

OK, sorry, one more thought: looking around me I see a lot of people in nice suits working on their laptops. Imagine them earning 60€/hour and needing double the time for every task because the WIFI is so weak. Assuming there are 100 such people on the train (a conservative estimate), then during this single trip from Berlin to Hamburg (2h) there are 60€ * 100 * 2 = 12 000€ of wasted human capital... better not tell that to any company paying their employees for the train ride and the “work time” during the trip.

Actually this article is about tech

This is not the first time I have experienced this, but why am I so triggered this time that I decided to write a blog post about it? As a web developer I am currently working on a live chat project (gramchat.me – please be kind, the landing page would be finished if I could actually work here), and I wanted to finish the landing page & documentation during this trip.

Now I find myself sitting here and my laptop, normally the device paying my rent, is not more than a dumb black box... close to every workflow I have requires the Internet; I can't work offline. grrrrr

How could that happen? Normally I am always at places with good WIFI or mobile network coverage (Berlin big city life), and so some bad habits sneaked in:

  • Development work
    • Google fonts
    • Payment gateway that needs to be configured
    • Documentation (How could anyone write software before stackoverflow?)
    • Package tools for just in time downloading of dependencies
    • Github Issues and Board for organization
    • Backend infrastructure is built on AWS Lambda (can't test that offline)
  • Entertainment
    • Movies are on netflix
    • Music is on spotify
    • I read mostly blog posts and web articles (via Hackero ;))
  • Communication
    • Telegram/WhatsApp/Email
  • Information
    • I am struggling to write this article as a non-native speaker, as I can't use Google Translate
  • ...and so on

Short interruption: because of other issues I had to change to another (slower) train. This one does not have WIFI at all... so now, next-level shit.


I sit here with basically three options of what to do:

  • Compose electronic music with LMMS, which I downloaded a few weeks ago but have no clue how to use :'D
  • Code something in Go. Thanks GoLand for your awesome built-in standard library documentation!
  • Write this article, ranting about the German train situation and about myself being so dependent on a resource I had considered as natural as air

So here I am writing the article :D

Prevent such a situation in the future

So the biggest fail is me not being prepared for offline usage of my devices. What will I do to prevent this in the future? Technical problems need technical solutions:

  • Entertainment
    • Music: Have at least some of my favorite playlists available offline
    • Movies: I actually don't see it as a big problem not to binge-watch for some hours => keeps me focused on working
    • Get an offline “read it later” system. A while ago I used wallabag and will reinstall it on all my devices.
  • Communication
    • You actually cannot do much about this one... so nothing to improve here
    • If you do not have an offline-usable email and messaging client, you should get yourself one. (Telegram has a nice standalone desktop client.) It is nice to at least be able to search through archived emails/chats
  • Information
    • An offline dictionary it is
    • Is there a Firefox/Chrome plugin that saves all the web pages I visit to offline storage? So that I can go back in my history and access the pages I visited before... if not, I might code one.
  • Development work
    • There are a lot of different offline code documentation systems. I chose Zeal, as it works on Linux and is standalone (the other ones work in the browser, and as I mostly surf in private mode they would not work for me, since I wipe the local storage at least every few days)
    • Offline PHP server => was actually quite easy. Did you know PHP has a built-in server? php -S localhost:8080
    • AWS Lambda offline testing framework? No clue how to do this yet... maybe a good topic for another blog post
    • There are actually some clients for GitHub with offline issue support. I will give them a try
    • Cache/save web resources locally. Maybe a good idea overall... better not to include Google as a dependency in your project, as they will abuse the data you send them with every visitor
    • There is a (sadly old) StackOverflow dump that could end up in some search tool... that would be amazing (but it might take a lot of disk space)

Oh girl, another thing came up: I have to show my train ticket, which is a PDF attached to an email... that I never downloaded. What is going on here... my life goes nuts without the Internet. Download your important tickets/documents!


So overall this trip showed me how dependent I am on the Internet and that I should change that. Please see this post as a work in progress, as I will update it and add offline tools as I get to know them and gain more experience with them.

Overall there is one main learning: download stuff instead of only opening it in the browser. (Same with my university PDFs... I never downloaded them for offline use, so no research for me now.)
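
As a little sketch of that habit in code, a download helper is a handful of lines in Go (the URL and file name are placeholders):

package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

// fetch downloads url into path so it is available offline later.
func fetch(url, path string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(path)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	// e.g. the train ticket from the story above
	if err := fetch("https://example.com/ticket.pdf", "ticket.pdf"); err != nil {
		log.Fatal(err)
	}
}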

If you have been in this situation yourself and found other tools that helped, I would love it if you shared them with me, so that I can add them to my stack and update this article.

So now I hope that the EDGE connection I have on my mobile hotspot right now will be enough to upload this article :'D

Wish you an awesome (online) time!

Simon

P.S. Another thing I found: check which applications are using the Internet on your machine, so that when you only have low bandwidth, this precious resource does not get sucked away by an open Skype or the like.


Did you like this post?

Donate: Donate Button or Patreon

Feedback: Email


RSS Feed – This work is licensed under Creative Commons Attribution 4.0 International License

No WIFI Icon made by Freepik from Flaticon is licensed by CC 3.0 BY