Tips on docker-compose

Docker is a technology to create containers. What if we have multiple containers and need them to work together? Docker Compose comes to the rescue. The configuration is a YAML file and it's easy to follow. I'm going to write down some tips I learned while using it.

Keep the Docker Container Running

A docker container is supposed to run only one process, and the container stops when that process exits. So if the process runs in the foreground and expects a terminal, it'll most likely exit immediately because there is no console or input. For this kind of container, we usually run the command docker run -ti <image name> to keep the container running. In the docker-compose world, you put this in the configuration for the service.

tty: true
stdin_open: true
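For example, a minimal docker-compose.yml sketch (the service name app and the centos image are just placeholders):

```yaml
services:
  app:
    image: 'centos:latest'
    tty: true          # same as the -t flag of docker run
    stdin_open: true   # same as the -i flag of docker run
```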

Expose the Network Ports

When running docker directly, we can use docker run -P to publish all the ports specified in the Dockerfile. We can also use docker run -p <host_port>:<container_port> to publish specific ports from the command line. However, docker run -P doesn't guarantee to map those container ports to the same ports on the host; the host ports are picked randomly. We can set the mapping explicitly with docker run -p. docker-compose by default creates a bridge network for the specified containers. Depending on how we write the configuration, we can expose a port only to that bridge network or also on the host.

expose:
  - "8080"

This will only expose the port to the other services on the same network.

ports:
  - "8080:80"

This also publishes the port on the host. It maps port 8080 on the host to port 80 in the container.
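Putting both forms together in one docker-compose.yml sketch (the service and image names here are made up):

```yaml
services:
  web:
    image: 'nginx:latest'
    expose:
      - "8080"      # reachable only from other services on the same network
    ports:
      - "8080:80"   # host port 8080 maps to container port 80
```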

Connect to the Linked Service in a Container

We specify the services in the configuration. docker-compose will bring up one or multiple containers running the same thing for each service. Even if you don't specify the networks, docker-compose still creates a default one for you. You can list all the networks with docker network ls. The name of the default network for your docker-compose containers is derived from the directory of the docker-compose.yml file. For example, if docker-compose.yml is in the directory 'example', the default network for those containers is 'example_default'. This may change, but this is the behavior right now.

How do you connect to other containers brought up by docker-compose? There are two ways:

  1. Use the service name. For example, in docker-compose.yml, you specify the services like

    foo:
      image: 'centos:latest'
    bar:
      image: 'ubuntu:latest'

    In the container that runs CentOS (the service foo), you can use the hostname bar to connect to the other container that runs Ubuntu.

  2. Use the IP address.
    When you find the network for your containers, you can use docker network inspect <network name> to inspect the details of the network. From there, you can find the IP addresses of the containers. For example:

        "Name": "example_default",
        "Containers": {
            "4a87be92f610b839c77f0fa87c7bdd13797ca4eac3c2061a4d8e66a1c5e9c867": {
                "Name": "example_foo_1",
                "EndpointID": "0752c01405904b702459560fcc1bc90ef0e24d3e769e8bc46d5edcf06944dba1",
                "MacAddress": "02:42:ac:15:00:03",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            },
            "6244c44cab85df31d684abb5e0b2ce12ecfbef902e13c638d1ab50981b876142": {
                "Name": "example_bar_1",
                "EndpointID": "5b0edb758b080010f8c3fca92e5cb583724c9fefcc23c5148c8c3375517d0093",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },

So far, these are the things I find useful when using docker-compose. I spent some time searching online for them. I hope this helps you try out this way of managing containers.

What Do We Need to Know about DNS

Not every programmer needs to know DNS. We tend to take it for granted: it's there, it's working. But as we move to the cloud and DevOps, it's inevitable that we deal with network configuration, so it's important to understand some basics of DNS.

This is the place to add a custom domain to a web app in Azure. It offers an A record and a CNAME record. Which record type should I use?

Add a custom domain for Azure app service

What's DNS

If you still remember some basic networking: an IP address is what we use to locate a service on the Internet. But how many IP addresses do you remember? We rarely use IP addresses directly; those are hard-to-remember long numbers. Instead, we remember domain names. What is a domain name? When you type something like www.example.com in the browser address bar, that is a domain name. But the network needs the IP address. How do we find the IP address from the domain name? That's where DNS (the Domain Name System) comes in. It resolves the domain name to the IP address.

DNS Hierarchy

You don't need to worry about setting up DNS for your network at home. When the modem connects, it sets up the DNS server, usually the ISP's. Does that server know all the domains? No. It queries other kinds of DNS servers. Together, these DNS servers resolve the domain name to the IP address.

DNS servers

At the top is the root server. The root server doesn't know what a domain name maps to, but it knows where the top-level domain (TLD) servers are. A top-level domain is something like .com, .net, etc. For example, a query for example.com goes to the .com top-level domain server. The TLD server then points to the authoritative servers, which hold the IP address.

The ISP's DNS server can cache the result, so it doesn't waste time going to the root servers and the other servers every time.

DNS Records

DNS records are stored on the DNS servers. A record contains the information about how to answer a DNS query. For example, if there is a record like example.com -> 192.0.2.1, then when you query example.com, it returns that IP address.

There are three types I want to cover here. The first is CNAME (Canonical Name record). It maps a domain name to its canonical name. In other words, it creates an alias of the canonical name. For example, for a record like www.example.com -> example.com, when you query www.example.com, it'll return the canonical name example.com. The CNAME record doesn't contain the IP address.

The second type of record is A or AAAA. It maps the domain name to an IP address. An A record maps to an IPv4 address and an AAAA record maps to an IPv6 address.

The third type of record is TXT. It attaches human-readable text to the domain. You can add machine-readable content too. What's in it is up to you. Consider it a note attached to the domain.

How to Get the IP Address

When the browser needs the IP address for the domain in the address bar, it sends the query to the DNS server configured for the network. The DNS server resolves it following the DNS hierarchy. If the domain in the address bar is an alias, the browser gets a CNAME record back. It then restarts the process with the canonical name. This time it receives an A record or AAAA record, which contains the IP address.
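The resolution loop above can be sketched as a toy (the records and addresses here are made up, and a real resolver adds caching, TTLs, and network I/O):

```python
# Toy record table: name -> (record type, value)
RECORDS = {
    "www.example.com": ("CNAME", "example.com"),  # alias
    "example.com": ("A", "192.0.2.1"),            # canonical name with an IPv4 address
}

def resolve(name):
    """Follow CNAME records until an A record yields an IP address."""
    rtype, value = RECORDS[name]
    if rtype == "CNAME":
        # Restart the lookup with the canonical name.
        return resolve(value)
    return value  # rtype == "A"

print(resolve("www.example.com"))  # 192.0.2.1
```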

By now, hopefully you have a clearer answer to which record to choose when you add a new custom domain to a web app in Azure. DNS is part of the infrastructure of the Internet. We should all know how the underlying pieces work when developing in the cloud era. Cloud computing has already taken care of a lot of things, but we still need to know the basics to work better in the cloud.

Get Started with Scheme

Scheme is one of the three major dialects of the programming language Lisp. It shares much of its syntax with Lisp but features minimalism. In this post, I'll summarize the basic Scheme syntax after reading this book to get started with Scheme. It should be helpful for getting started if you already have some programming experience.

Everything Meaningful Is in a List Form

The basic syntax is in the form (obj1 obj2 ...) or (expression1 expression2 ...). For example, (1 2 3) is a list that contains 1, 2, and 3. Each element can be of a different type, so it can contain strings too. For example, (1 2 "hello"). If it is a procedure call, the first expression is the procedure. For example, (+ 1 2). The first element is +, the sum operator. It adds the numbers in the rest of the list.

We can nest expressions; each expression can itself be in the list form. That means the first element can be a list form which returns a procedure, and the remaining elements are then applied to the returned procedure. For example, ((lambda (x y) (+ x y)) 1 2) generates 3.

You can have a single symbol or value too. For example "Hello" will generate "Hello". And 2 will generate 2. But if you want to have a meaningful operation, you should use the list form.

Basic Operations

These are the basic operations

  1. (quote expression)

    As I said before, data and procedure calls share the same form. quote forces the rest to be treated as data. For example, (+ 1 2) calculates the sum of 1 and 2, which generates 3. (quote (+ 1 2)) always treats (+ 1 2) as data: + becomes a symbol instead of an operation, so it generates the list (+ 1 2) instead of doing the calculation. We can use ' as an abbreviation for quote, so '(+ 1 2) is the same as (quote (+ 1 2)).

  2. (car list-expression) returns the first element in the list

    (car '(1 2 3)) => 1

  3. (cdr list-expression) returns the list that contains the elements except the first one.

    (cdr '(1 2 3)) => (2 3)

  4. (cons obj1 obj2) generates a new pair in which obj1 is the first element and the remaining are from obj2.

    It can generate a proper list or an improper list. For example,
    (cons 'a 'b) => (a . b) ; an improper list
    (cons 'a '(b c)) => (a b c) ; a proper list

Variable, Expression and Procedure

The syntax to define a lambda expression is

(lambda (var ...) body1 body2 ...)

The syntax to define a top level variable or procedure is

(define var expression)

The syntax to create a variable at the local scope is

(let ((var expression) ...) body1 body2 ...)

We can use both define and let to create a variable or a procedure.
For example, (define count 0) creates a top-level variable count with the value 0.
(let ((count 0)) body ...) creates a local variable count with the value 0.
To define a procedure, it's in the form

(define sum
    (lambda (x y)
        (+ x y)))

This creates a procedure sum at the top level. It sums up the two given arguments.

The difference between define and let is that define creates the variable or procedure at the top level, so any other code can reference it. let creates it inside the let scope. That means the variable or procedure isn't available outside of the let body.
We can nest let expressions. In that case, a variable or procedure with the same name shadows the one from the outer scope. Here is code to demonstrate it.

(define var 1)
(let ((var 2))
    (let ((var 3))
        (display var))  ; displays 3
    (display var))      ; displays 2
(display var)           ; displays 1

(display) is used to show its argument on the output. (newline) prints a new line. The output of this code is

321

The variable var in the inner scope shadows the one from the outer scope, so the three calls display 3, 2, and 1 in turn.

Conditional Expression

We can do a if check in Scheme. The syntax is

(if test consequent alternative)

The test can be any expression. Useful combinators are and, or, and not. The syntax is

(and expression ...)

(or expression ...)

(not expression)

Similar to if, we can also do a switch-style conditional with cond

(cond (test expression) ... (else expression))

There are special symbols: #t means true and #f means false.

Report Errors

(assertion-violation symbol-of-scope message what-violates)

Set Assignment

(set! symbol value)

Improper List

The definition of a proper list is recursive: a proper list is either the empty list or a list whose cdr is a proper list. If a list isn't a proper list, it's an improper list. An improper list is written with a '.'. For example, (a . b) is an improper list and (a b) is a proper list. Below is an example in code

(cons 'a 'b) => (a . b)
(cdr '(a . b)) => b ; b is not a list
(cons 'a '(b)) => (a b)
(cdr '(a b)) => (b) ; (b) is a list

These are the basic syntax and building blocks of Scheme. With them we can go on to advanced Scheme and start writing programs.

Make It Your Habit to Fix Build Warnings

Warnings are not errors, so why bother spending time fixing them? It feels faster to get the code compiled and running without looking into the warnings. But warnings may reveal that the code doesn't do what you mean. It should be your habit to fix build warnings.

Errors or Warnings

Build errors are those that prevent you from building or running the program. Like it or not, you have to fix all build errors. Warnings, on the other hand, don't have that restriction: anybody can ignore them and run the program. Luckily, the compiler usually has an option or switch to treat warnings as errors. By turning it on, everybody has to address the warnings before they can run the program. Having the compiler treat warnings as errors is the most reliable way to make everybody fix the warnings too.

I'll show how to do it in .NET Core and illustrate the importance of warnings with some examples.

Enable Warnings as Errors

There are two ways to enable warnings as errors.

The first one is to set the property TreatWarningsAsErrors in the project file. For example, in the csproj file,

<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>

On the other hand, <NoWarn> is used to exclude certain warnings:

<NoWarn>642;649</NoWarn>

<WarningLevel> is used to set the warning level. The higher the number, the more (and less severe) warnings it'll report. The default value is 4.

Second, use the compiler option -warnaserror. For example,

dotnet build -warnaserror example.csproj

This will turn all warnings to errors. You can set specific warning with -warnaserror like -warnaserror:642,649. -nowarn on the other hand, disables certain warnings.

-warn is used to set the warning level.

The Warning Examples

  1. warning CS4014: Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call.

    This occurs when you forget to use await on an async call. It reveals a mistake in the code even though it's not a build error. There may be a subtle bug when the program runs, and it doesn't always happen, which makes the bug much more difficult to reproduce and diagnose. For example

    using (FileStream fs = File.Create(path))
    {
        fs.WriteAsync(bytes, 0, byteSize, CancellationToken.None); // CS4014: not awaited
    }

    Without await on fs.WriteAsync, it's possible that fs is being disposed while WriteAsync is still running. It all depends on the scheduling. So it's quite difficult to diagnose.

  2. warning CS1717: Assignment made to same variable; did you mean to assign something else?

    I'm sure you don't intend to assign a variable to itself. It could be a typo, but you may not notice it at a quick glance. For example

    class Test
    {
        private int count;

        public Test(int count)
        {
            count = count; // CS1717: assigns the parameter to itself; should be this.count = count
        }
    }

    By treating warnings as errors, you're forced to look at the message and find out what's going on.


Not all warnings indicate mistakes. Spend some time thinking about each one. If you can justify it, you can suppress the warning. One way of doing that is to use <NoWarn> in the project file or -nowarn in the compiler options. I don't recommend that because it applies to the whole project. I suggest using #pragma warning to disable the warning, and remember to restore it afterwards. For example

#pragma warning disable CS3021
// the code that triggers CS3021 goes here
#pragma warning restore CS3021

Takeaway

Always set the flag to treat warnings as errors in your build or compiler. Always fix warnings unless you can justify not to. When you do suppress a warning, explicitly disable it right before the line and restore it afterward.

Dockerize Web Service

My current web services run on the same virtual machine. They include WordPress (a.k.a. this blog), Nextcloud, Redmine, etc. One way to keep them secure is to keep them updated. That can be a headache when I need to chase down new dependencies for the newer versions. It becomes more convenient when I dockerize the web services.

The drawbacks of running them directly on the virtual machine are

  • The packages coming with the OS on the virtual machine aren't always up to date. Often I have to run a custom script to install newer ones.
  • Installing via custom scripts may leave the system in an unsustainable state. I forget what was done and how to undo it.
  • It's possible that different web services require different versions.

I think I'm done with those struggles every time I update the services. So I decide to put them into docker.

With docker, I can

  • Easily know what I install on the docker image.
  • Re-create the image and don't worry about undoing the custom scripts.
  • Isolate different services if they have conflicts on the packages.
  • Choose the base OS image if it is needed.

I spend some time writing the Dockerfile.

  1. Figure out the dependencies. Luckily, with the right OS image, I can just use its package manager to install instead of running custom scripts. This part is straightforward since I can read the documents.
  2. Use VOLUME and save the configuration on the host instead of inside the container. This also saves me a lot of effort when I iterate on the image build.

There are two technical challenges I had to tackle and where I spent most of my time.

  1. Certbot needs to use systemd to control apache when I add SSL. After a bit of searching, I found docker systemctl replacement. It creates a script to replace the command. The author provides some Dockerfile examples for different images. I modified that to use a custom script that runs a bunch of other stuff and systemctl start httpd, followed by bash. The script needs to end with a long-running command, because the container stops after the command ends.
  2. Use cron inside the container. I find a good discussion on Stack Overflow. The main steps in Dockerfile are
# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron

There are, however, some things I can only do when the system is running. Those are done in the custom script I set to run in CMD. I use flags to make sure they only run once for each container.
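Putting the cron steps and the CMD script together, a hedged Dockerfile sketch (the base image, package names, and start.sh script are assumptions for illustration, not my actual setup):

```dockerfile
FROM centos:7

# Install cron and apache (package names vary by base image)
RUN yum -y install cronie httpd && yum clean all

# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
# Give it the right permissions and register the cron job
RUN chmod 0644 /etc/cron.d/hello-cron && crontab /etc/cron.d/hello-cron

# Keep the service configuration on the host
VOLUME ["/etc/httpd/conf"]

# start.sh does the one-time run-time setup (guarded by flags), starts
# crond and httpd, then stays in the foreground so the container keeps running
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD ["/usr/local/bin/start.sh"]
```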

With this change, I removed the unnecessary packages and settings from my virtual machine. I can control how the environment is set up for each web service.

Use Azure Blob to store files

In this post, I use the Azure Storage API to upload files to an Azure Blob container. What's more, I use FileSystemWatcher to monitor file changes in a directory and update the blob when a file changes. This can back up files to Azure Blob storage. To use it, you need an Azure Storage account. You can test it locally using the Azure Storage Emulator.

I'm using .NET Core 3.0 on Linux. First, let's create the project named azureblob and add necessary packages

dotnet new console -n azureblob 
dotnet add package Microsoft.Azure.Storage.Blob 
dotnet add package Newtonsoft.Json

The Azure Blob API lives in Microsoft.Azure.Storage.Blob, and I need Newtonsoft.Json to read the settings. Speaking of settings, I create this settings class:

[JsonObject(NamingStrategyType = typeof(SnakeCaseNamingStrategy))]
public class Settings
{
    public string BlobConnectionString { get; set; }
    public string BlobContainer { get; set; }
    public string MonitoredDirectory { get; set; }
}

Correspondingly, the setting file looks like this.

{
    "blob_container": "azureblobtutorial",
    "blob_connection_string": "<ReplaceWithYourStorageConnectionString. You can find the one for Azure Storage Emulator from the doc.>",
    "monitored_directory": "<ReplaceWithYourDirectory>"
}

Next, I create a class to call the Azure Blob API. The key is to create the CloudBlobClient and get the blob container.

var storageAccount = CloudStorageAccount.Parse(connectionString);
var blobClient = storageAccount.CreateCloudBlobClient();
this._blobContainer = blobClient.GetContainerReference(blobContainer);
this._requestOptions = new BlobRequestOptions();

Before uploading or deleting a blob, we should get a reference to the blob by its name. I use the file path as the name here

var blob = await this._blobContainer.GetBlobReferenceFromServerAsync(filePath, cancellationToken);

Then we can use the blob to upload or delete the file in Azure Blob storage.

await blob.UploadFromFileAsync(filePath);
await blob.DeleteIfExistsAsync(cancellationToken);

Those are the basic operations on an Azure blob. Next, we monitor the file changes in the directory set in monitored_directory using FileSystemWatcher. I need to set up the notify filters to listen to the right events, and the event handlers as well

this._watcher = new FileSystemWatcher(monitoredDirectory);
this._watcher.NotifyFilter = NotifyFilters.LastWrite |
                             NotifyFilters.Size |
                             NotifyFilters.FileName |
                             NotifyFilters.DirectoryName;
this._watcher.IncludeSubdirectories = true;
this._watcher.Changed += this.OnFileChanged;
this._watcher.Created += this.OnFileChanged;
this._watcher.Renamed += this.OnFileRenamed;
this._watcher.Deleted += this.OnFileChanged;
this._watcher.Error += this.OnFileWatchError;
this._watcher.EnableRaisingEvents = true;

Whenever I receive a created/deleted/changed event, OnFileChanged eventually triggers an upload or a delete on the blob. A Renamed event is treated as a deletion (of the old name) and a creation (of the new name).

The complete code is in this commit in this GitHub repo. It still needs some more work to fully back up files to Azure Blob storage correctly.

  1. When a directory is renamed, it doesn't automatically update the names of the blobs for the files/subdirectories under it.
  2. It doesn't implement differential upload. A small change to a file uploads the whole file, which can waste bandwidth for large files.
  3. When there are frequent changes on the same file, it doesn't batch the changes. It'll upload the whole file that many times.

Regardless, it demonstrates how to use Azure Blob storage from a program, as well as how to monitor file changes.

Redirect Assembly Binding

In a large .NET project, complex dependencies can be inevitable. To make it worse, multiple dependencies may depend on the same assembly but on different versions of it. There is already a way to redirect binding to a different version of an assembly in your app. This document outlines how to do it for an application. Sometimes, that's not enough.

The document outlines these approaches

  1. The vendor of assemblies includes a publisher policy file with the new assembly.
  2. Specify the binding in the configuration file at the application level
  3. Specify the binding in the configuration file at the machine level.

The first approach requires the vendor to publish the publisher policy file. The file has to be in the global assembly cache, which affects every application on the machine.

What if the vendor doesn't provide this file? Then we can specify the binding in the configuration file by using <bindingRedirect>. The configuration applies to the specified application if it's at the application level, or to every application if it's at the machine level.

What if there is no publisher policy file, and no configuration file for the binding at the application level or the machine level? Can that happen when you're writing your own application? Probably not. This issue usually happens when you write a plugin or some assembly that runs inside an application you don't own. For example, you're writing a test that's run by vstest. You use a library A which depends on assembly B version 1.0, and you also use a library C which depends on assembly B version 2.0. At runtime, one version of assembly B will not be loaded. You don't own assembly B, and you don't own the application that runs your assembly. Because of that, you cannot count on the publisher policy file or the application-level configuration file. You don't want to create a machine-level configuration file either. There is no assembly-level configuration file; such a file is ignored at runtime. I think the best bet is to load the dependency yourself in the program. When the runtime doesn't find the right assembly, it raises the event AppDomain.AssemblyResolve.

How do we use AppDomain.AssemblyResolve? The basic idea is:

  1. Check whether the assembly is loaded.
  2. If it's loaded, and if the loaded version satisfies your requirements, then return the loaded one.
  3. If the assembly isn't loaded and you find one that satisfies your requirements, you can call Assembly.LoadFile to load the assembly and return it.

In pseudo-code, it is

static Assembly OnAssemblyResolve(object sender, ResolveEventArgs args)
{
    if (args.Name.Contains("AssemblyB"))
    {
        // Return the copy that is already loaded, if any.
        foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            if (assembly.FullName.Contains("AssemblyB"))
                return assembly;
        }

        // Otherwise load a suitable copy ourselves.
        return Assembly.LoadFile("PathToAssemblyB");
    }

    return null;
}

// Register the handler before the assembly is first needed:
AppDomain.CurrentDomain.AssemblyResolve += OnAssemblyResolve;

There are, however, some caveats. First, it's at the app domain level, meaning it may impact every assembly in the same app domain. AppDomain.AssemblyResolve passes an event parameter ResolveEventArgs. It has a property ResolveEventArgs.RequestingAssembly that indicates which assembly is requesting the one that cannot be resolved. You can use it to make sure you're loading the assembly in the right context. Second, if you call one of the Assembly.Load overloads inside the handler and that triggers the AssemblyResolve event again, you'll get a stack overflow. You can check out this guidance.

Used well, I think AppDomain.AssemblyResolve can supplement the configuration file in handling assembly binding issues in an application.

Update Azure Bot Using Command Line

We can of course manage Azure Bot Service in different ways, for example, from the portal, from Visual Studio, or from command line. I like to use command line. It's convenient: I don't need to navigate the UI in the portal or Visual Studio. I just execute the same command (or last command) from the command line. We can create, publish, and update an Azure bot effectively.

Azure Bot Service Documentation is a good place to start learning to develop an Azure bot. There is a section about deploying the bot using the CLI. It covers the commands to create and publish a bot: az bot create creates a new bot, and az bot publish publishes your code to it.

But wait. What if I already have a published bot? I've spent so many hours debugging my code and making my bot more intelligent, and I want the bot to run the new code. Of course, you can do that from Visual Studio. I'd rather use the command line. Here is the command:
az bot update --name <BotName> --resource-group <GroupName>
Run this in the top directory of your code. For example, if /path/to/BotCodeInJavaScript contains your code, that's the directory where you run the command.

That's it. Your published bot is smarter.

Bad const, Bad enum

Many languages have const and enum. The compiler treats enum values as constant integers too, so an enum can go as wrong as a const can. With that in mind, I'll use const as the example to demonstrate how they go wrong.

The Good Side of a const

The meaning of const is, as the name indicates, that the value is a constant. There are run-time constants and compile-time constants. A run-time constant means the value doesn't change while the program runs. A compile-time constant means the programmer can't change the value of the variable: the compiler forbids any assignment to the const except the first initialization. These are the good sides of a const. When you don't want the value of a variable to change, using const is encouraged in general, and the compiler can also use the knowledge to optimize your code.

When It Goes Bad

It can cause problems when you use a const in a shared library (.so) or dynamic library (.dll). Let me demonstrate with an example in C++ on Linux. It'll be the same in C++ on other platforms, or in C#.

1. Create a header file with a const in the class: ConstHeader.h

#ifndef __CONSTHEADER_H__
#define __CONSTHEADER_H__

const int TestConst = 10;

class ConstHeader
{
public:
	ConstHeader();
	int get_num() const;

private:
	const int num;
};

#endif //__CONSTHEADER_H__

2. Create source file ConstHeader.cpp

#include "ConstHeader.h"

ConstHeader::ConstHeader() :
	num(TestConst)
{
}

int ConstHeader::get_num() const
{
	return num;
}

3. Create the program that uses the const: UseConst.cpp

#include <iostream>

#include "ConstHeader.h"

using namespace std;

int main(int argc, char** argv)
{
	ConstHeader header;
	cout << "number in executable " << TestConst << endl
		<< "number in library " << header.get_num() << endl;
	return 0;
}

4. Compile ConstHeader.cpp into a shared library

$ g++ -shared -fPIC -Wl,-soname,libConstHeader.so -o libConstHeader.so ConstHeader.cpp

5. Create the program linking to the shared library

$ g++ -o UseConst UseConst.cpp -L. -lConstHeader

6. Run the program

$ LD_LIBRARY_PATH=. ./UseConst
number in executable 10
number in library 10

That looks pretty good. The program uses the same const value as the one in the shared library.

Now what happens if we update the shared library? Let's change the value of the const TestConst in the shared library:

const int TestConst = 20;

Rebuild the shared library and run the program again without recompiling UseConst

$ g++ -shared -fPIC -Wl,-soname,libConstHeader.so -o libConstHeader.so ConstHeader.cpp
$ LD_LIBRARY_PATH=. ./UseConst
number in executable 10
number in library 20

Oops. When the program uses the const directly, it gets 10, while the shared library says the value is 20.

What's Wrong

Let's pause a minute to think about what changed. You use a const integer TestConst from a shared library, but the compiler bakes its value into your executable at compile time. When the library is updated, the executable keeps the old value. So pay extra attention to consts that are defined in a shared library's headers; when the value changes, it can be a real pain to debug. The same applies to an enum's implicit values. For example:

enum class Color
{
	red = 0,
	blue,
	yellow
};

If this is from a third-party library and the enum Color is changed in a new version, e.g.

enum class Color
{
	red = 0,
	green,
	blue,
	yellow
};
Your program will break if you use Color::yellow or Color::blue and don't compile against the updated header, because their underlying values have shifted.

All the consts and enums defined in the header files can be accessed from a separate compile unit. They are actually interfaces, part of the contract. As a library user, when you use an interface, you expect the same interface to do the same thing in all versions of the library; your application relies on that to function well. As a library author, you don't want to drive your users crazy. Don't change the public interfaces.

How to Mitigate It

It depends on your purpose. As a library author, if you just want to provide a well-defined value to the library user, use a function to return the value. This has the overhead of a function call, but you can change the value in a future version. In C#, you can also declare the variable readonly instead of const. Either way, it won't become a compile-time constant; the runtime will read the value from your library, and callers still can't change it.

For enums, it's a little more complicated. The first approach is to always append the new enumerator at the end. Take Color as an example: instead of adding green between red and blue, you always add the new enumerator after the last one, yellow. A second approach is to always set an explicit value for every enumerator, as we do for red. Both approaches have problems in collaborative work on a large team. There is no way to enforce appending at the end in the first approach. In the second approach, two people may use the same explicit value for new enumerators in their own work and both check in at the same time. Comments in the code won't always help either.

Remember, public consts and enums are interfaces. Don't change them. This is the best option to prevent them from going bad.

Make Investment Work

From school, training, and work, we gain deeper knowledge in software development. But it's not uncommon to lack investment education, since schools don't teach it to everyone unless we happen to be in a related major. I decided to write something about investment to reflect what I've learned by myself so far.

Financial advisor

A financial advisor is a person who provides financial advice or guidance to clients. They can be insurance agents, investment managers, tax advisors, estate planners, etc. Investment managers deal with their clients' investment portfolios. It may be good to have your money managed by experts, but you have to be cautious in choosing a financial advisor. They will charge you fees as compensation. Some charge 1% - 1.5% of the assets they manage; some charge by the hour. Whatever your criteria are, please add this one: a fiduciary. A fiduciary is held to a higher ethical standard and is required to act in the client's best interest. That means that, if the client had the same expertise and information, the client would have taken the same action. A non-fiduciary is only required to take suitable and reasonable actions, which may not be in the client's best interest.

Actively managed fund and passively managed fund

An actively managed fund is always attended by a manager, who is supposed to be an expert. The manager picks the stocks, bonds, and other investment vehicles in the fund. A passively managed fund, on the other hand, doesn't require a manager to pick investments; it usually just matches the index it follows. With less human intervention, it incurs lower fees.

You may believe an actively managed fund will perform well, since an expert always keeps an eye on it and adjusts immediately to market changes. You remember the fund's chart always looks good. For example, in the screenshot, FSPTX outperforms its benchmark. (Note: the chart is for illustration only; it doesn't mean the fund is good or bad.) Is that the whole story? Did you notice the text below it? The performance data featured represents past performance, which is no guarantee of future results.

Example Fund Prospectus


This study by S&P Dow Jones Indices in 2016 shows that 90 percent of actively managed funds failed to outperform their index targets over the preceding one-year, five-year, and 10-year periods. Why don't you see such funds from your fund managers? Underperforming funds can be discontinued; fund managers don't want you to lose confidence.

One major factor in the underperformance is the fees of actively managed funds. The costs include management fees, trading commissions, etc. All financial advisors need compensation, no matter how well the fund performs. What really matters is how much you keep after the fees. Let's look at the fees of the fund FSPTX.

Example Fund Fee


Its expense ratio is 0.77% and its Exp Cap (Voluntary) is 1.15%. The expense ratio is what you pay right now; the Exp Cap is the limit of the fee you may pay in the future, which means you might end up paying a 1.15% fee. For example, say you have $10,000 invested and the market value doesn't change for 10 years. At an expense ratio of 0.77%, you pay $10,000 * 10 * 0.77% = $770. At 1.15%, you pay $10,000 * 10 * 1.15% = $1,150. That's $380 more on $10,000, even when you don't have any gains. The fee of a passively managed fund can be lower than 0.1%. I'm sure you see the difference. If the fees worry you, you may be scared to learn that the Expense Cap may be terminated or revised at any time. You have no control over how much you're charged.

First, most funds cannot outperform their index, so you already don't have many gains to spare. Second, you pay more in fees to hold an actively managed fund. A passively managed fund mostly just mirrors the index it's tracking, so it doesn't need high fees to get performance similar to the index. An actively managed fund has to perform much better than a passively managed one to give you equal gains after you pay the fees. All of this makes passively managed funds more attractive than actively managed funds.

Diversification and re-balance

Don't put all your eggs in one basket. For example, if you only invest in one company's stock, your return on investment is the same as that stock's. When the company has a rough year, or even goes bankrupt, you may lose everything. The best way to reduce the risk is to diversify. You can diversify within the same category; for example, you buy 25 – 30 unrelated stocks. Or you can diversify across categories and hold everything in your portfolio: stocks, bonds, precious metals, etc. The theory is that the same economic data won't affect all of them in the same way. It may hit one stock or one category heavily but be neutral or even good for another. In the end, the positives smooth out the negatives. You'll need to decide how to allocate your investments, depending on your risk tolerance and investment goals.

What strikes me is re-balancing. It has a twofold effect. First, it shields you from extra risk. Second, it is a simple way to buy near the bottom and sell near the top. For example, say you have a portfolio with 80% stocks and 20% bonds, and the stocks grow to 90% of the portfolio. That may mean the stock market is going up; because the market moves in cycles, it will go down some day. If you keep investing in stocks and the market crashes, you'll lose a lot. Why not re-balance and hold more in bonds? You may sell some stocks and buy more bonds, or direct new money into bonds, until you are back at 80% stocks and 20% bonds. That may not be the exact top when you sell the stocks, but who knows? Don't try to time the market. You're reducing the risk and locking in some gains.

Few can guarantee getting in and out of the market at the right time. But there are many ways to reduce the risks. Diversification and re-balancing are good tools in your risk management.


Lump-sum vs recurring

Do you save money until it's large enough before you invest? Or do you set aside a smaller amount from each paycheck and invest every month? The first approach is lump sum, and the second one is recurring.

They both have advantages and disadvantages. With a lump sum, if you buy when the market is at the bottom, you'll get much larger gains: every rise afterwards counts toward your gains. Remember to sell at the right time, too; you need a lot of luck on both ends. If the market crashes right after you buy, you'll need to wait a long time just to break even. With recurring investment, regardless of how the market moves, you keep investing smaller amounts. On some of the money you may lose; on some you gain. When you spread out your investment like this, you're also reducing the risks. Lump sum can give you a much higher return when you time it right. Recurring, on the other hand, reduces your risk of timing the market wrong.


I've been talking mostly about avoiding fees and reducing risks. What really makes it work is discipline. Once you decide how to allocate your investments, how to re-balance, and whether to invest lump-sum or recurring, don't let good or bad news in the market sway you. Research and analyze before you make any changes, and try to do it rationally, not emotionally. The best way to get more gains is to time the market right, but that's almost impossible; luck matters much more than expertise there. The second best way is to reduce the risks. Make sure you don't lose money before you get gains.