Get Started with Scheme

Scheme is one of the three major dialects of the programming language Lisp. It shares much of Lisp's syntax but is known for its minimalism. In this post, I'll summarize the basic Scheme syntax I learned from reading this book. Since I already have some programming experience, this summary should be enough to get started.

Everything Meaningful Is in a List Form

The basic syntax is in the form (obj1 obj2 ...) or (expression1 expression2 ...). For example, (1 2 3) is a list that contains 1, 2, and 3. The elements can be of different types, so a list can contain strings too, for example (1 2 "hello"). If the form is a procedure call, the first expression is the procedure. For example, in (+ 1 2), the first element is +, the sum operator. It adds up the rest of the list.

We can nest expressions: each element can itself be in list form. In particular, the first element can be an expression that returns a procedure; the remaining elements are then passed as arguments to that procedure. For example, ((lambda (x) (* x x)) 5) evaluates the lambda expression first and then applies the resulting procedure to 5, giving 25.

You can also have a single symbol or value. For example, "Hello" evaluates to "Hello", and 2 evaluates to 2. But to perform a meaningful operation, you use the list form.

Basic Operations

These are the basic operations

  1. (quote expression)

    As I said before, data and procedure calls share the same form. quote forces the rest of the form to be treated as data. For example, (+ 1 2) calculates the sum of 1 and 2, which evaluates to 3, while (quote (+ 1 2)) always treats (+ 1 2) as data: + becomes a symbol instead of an operation, so it evaluates to the list (+ 1 2) instead of doing the calculation. We can use ' as an abbreviation for quote, so '(+ 1 2) is the same as (quote (+ 1 2)).

  2. (car list-expression) returns the first element in the list

    (car '(1 2 3)) => 1

  3. (cdr list-expression) returns the list that contains the elements except the first one.

    (cdr '(1 2 3)) => (2 3)

  4. (cons obj1 obj2) creates a new pair whose first element (the car) is obj1 and whose remainder (the cdr) is obj2.

    It can generate a proper list or an improper list. For example,
    (cons 'a 'b) => (a . b) ; an improper list
    (cons 'a '(b c)) => (a b c) ; a proper list

Variable, Expression and Procedure

The syntax to define a lambda expression is

(lambda (var ...) body1 body2 ...)

The syntax to define a top level variable or procedure is

(define var expression)

The syntax to create a variable at the local scope is

(let ((var expression) ...) body1 body2 ...)

We can use both define and let to create a variable or a procedure.
For example, (define count 0) creates a top-level variable count with the value 0.
(let ((count 0)) body ...) creates a local variable count with the value 0 that is visible in body.
To define a procedure, we bind a lambda expression to a name. It's in the form

(define sum
    (lambda (x y)
        (+ x y)))

This creates a procedure sum at the top level. It sums up the two given arguments.

The difference between define and let is that define creates the variable or procedure at the top level, so any other code can reference it. let creates the binding inside the let body, which means the variable or procedure isn't available outside of the let expression.
We can nest let expressions. In that case, a variable or procedure with the same name shadows the one from the outer scope. Here is code to demonstrate it.

(define var 1)
(let ((var 2))
    (let ((var 3))
        (display var)
        (newline))
    (display var)
    (newline))
(display var)
(newline)

display prints its argument to the output, and (newline) prints a line break. The output of this code is

3
2
1

The variable var in the inner scope shadows the one from outer scope.

Conditional Expression

We can do an if check in Scheme. The syntax is

(if test consequent alternative)

The test can be any expression. We can combine tests with and, or, and not. Their syntax is

(and expression ...)

(or expression ...)

(not expression)

Similar to a switch in other languages, cond tries each test in order and evaluates the expressions of the first clause whose test passes

(cond (test expression) ... (else expression))

For example, (cond ((< x 0) -1) ((> x 0) 1) (else 0)) evaluates to the sign of x.

There are two special symbols: #t means true and #f means false.

Report Errors

(assertion-violation who message irritants ...)

who identifies who reports the violation (usually a symbol naming the procedure), message describes the violation, and the irritants are the offending values. For example, (assertion-violation 'sum "not a number" 'a).

Set Assignment

(set! symbol value)

set! assigns a new value to an existing variable. For example, after (define count 0), (set! count 1) changes count to 1.

Improper List

The definition of a proper list is recursive: a proper list is a list whose cdr is a proper list, and the empty list is a proper list. A list that isn't proper is an improper list. An improper list is written with a dot: for example, (a . b) is an improper list and (a b) is a proper list. Below are examples in code

(cons 'a 'b) => (a . b)
(cdr '(a . b)) => b ; b is not a list
(cons 'a '(b)) => (a b)
(cdr '(a b)) => (b) ; (b) is a list

These are the basic syntax and building blocks of Scheme. With them, we can move on to more advanced Scheme and start writing programs.

Make It Your Habit to Fix Build Warnings

Warnings are not errors, so why bother spending time fixing them? It feels faster to compile and run first and look into the warnings later. But warnings may reveal that the code doesn't do what you mean. It should be your habit to fix build warnings.

Errors or Warnings

Build errors are those that prevent you from producing or running a program. Like it or not, you need to fix all the build errors. Warnings, on the other hand, don't have that restriction: anybody can ignore them and run the program. Luckily, the compiler usually has an option to treat warnings as errors. By turning it on, everybody has to address the warnings before they can run the program. Having the compiler treat warnings as errors is the most reliable way to make everybody fix the warnings too.

I'll show how to do this in .NET Core and illustrate the importance of warnings with some examples.

Enable Warnings as Errors

There are two ways to enable warnings as errors.

The first one is to set the property TreatWarningsAsErrors in the project file. For example, in the csproj file,

<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>

<NoWarn>, on the other hand, is used to suppress specific warnings

<NoWarn>$(NoWarn);CS0168;CS0219</NoWarn>

<WarningLevel> is used to set the warning level. The higher the number, the more warnings (including less severe ones) the compiler reports. The default value is 4.

Second, use the compiler option -warnaserror. For example,

dotnet build -warnaserror example.csproj

This turns all warnings into errors. You can also treat only specific warnings as errors, like -warnaserror:642,649. -nowarn, on the other hand, disables specific warnings.

-warn is used to set the warning level.

The Warning Examples

  1. warning CS4014: Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call.

    This occurs when you forget to use await on an async call. It reveals a mistake in the code even though it's not a build error: there may be a subtle bug when the program runs, and it doesn't always happen, which makes it much more difficult to reproduce and diagnose. For example

    using (FileStream fs = File.Create(path))
    {
         fs.WriteAsync(bytes, 0, byteSize, CancellationToken.None);
    } 

    Without await on fs.WriteAsync, it's possible that fs is disposed while WriteAsync is still running. It all depends on the scheduling, so it's quite difficult to diagnose.

  2. warning CS1717: Assignment made to same variable; did you mean to assign something else?

    I'm sure you don't intend to assign a variable to itself. It could be a typo, but you may not notice it at a quick glance. For example

    class Test
    {
        private int count;
        public Test(int count)
        {
            count = count; // assigns the parameter to itself; this.count = count was intended
        }
    }
    
    By treating warnings as errors, you're forced to look at the message and look at what's going on.

Exceptions

Not all warnings indicate mistakes. Spend some time thinking about each one. If you can justify it, you can suppress the warning. One way of doing that is to use <NoWarn> in the project file or -nowarn on the compiler. I don't recommend it because it applies to the whole project. I suggest using #pragma warning to disable the warning around the relevant code. Remember to restore the warning afterwards. For example


#pragma warning disable CS3021
//code
#pragma warning restore CS3021

Take away

Always set the flag that treats warnings as errors in your build or compiler. Always fix warnings unless you can justify not to. When you do suppress a warning, disable it explicitly before the affected lines and restore it right afterwards.

Dockerize Web Service

My current web services are all running on the same virtual machine. They include WordPress (a.k.a. this blog), Nextcloud, Redmine, etc. One way to keep them secure is to keep them updated, which can be a headache when I need to track down new dependencies for the newer versions. It becomes much more convenient when I dockerize the web services.

The drawbacks of running them directly on the virtual machine are

  • The packages coming with the OS on the virtual machine aren't always up to date. Often I have to run a custom script to install newer versions.
  • Installing via custom scripts may leave the system in an unmaintainable state. I forget what was done and how to undo it.
  • Different web services may require different versions of the same package.

I'm done with those struggles every time I update the services. So I decided to put them into Docker.

With docker, I can

  • Easily know what is installed in the Docker image.
  • Re-create the image without worrying about undoing custom scripts.
  • Isolate different services if their packages conflict.
  • Choose the base OS image if needed.

I spent some time writing the Dockerfile.

  1. Figure out the dependencies. Luckily, with the right OS image, I can just use its package manager to install them instead of running custom scripts. This part is straightforward since I can read the documentation.
  2. Use VOLUME and save the configuration on the host instead of inside the container. This also saves me a lot of effort when I iterate on building the image.

There were two technical challenges that I had to tackle and where I spent most of my time.

  1. Certbot needs to use systemd to control Apache when I add SSL. After a few searches, I found docker-systemctl-replacement, which provides a script that replaces the systemctl command. The author provides Dockerfile examples for different images. I modified one to use a custom script that runs a bunch of other things, then systemctl start httpd, followed by bash. The script needs to end with a long-running command, because the container stops once that command exits.
  2. Use cron inside the container. I found a good discussion on Stack Overflow. The main steps in the Dockerfile are
# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
# Give the cron job file the required permissions
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron

There are, however, some things I can do only when the system is running. Those are done in the custom script I set to run in CMD. I use a flag file to make sure they run only once for that container.
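The run-once guard can be sketched with a flag file. This is my illustration of the idea, not the actual script from my setup; the flag path and the setup step are made up.

```shell
#!/bin/sh
# Run one-time setup only on the container's first start.
INIT_FLAG="${INIT_FLAG:-/tmp/container-initialized}"

if [ ! -f "$INIT_FLAG" ]; then
    # hypothetical one-time work, e.g. fixing permissions on a volume
    echo "running one-time setup"
    touch "$INIT_FLAG"
else
    echo "already initialized"
fi

# In the real CMD script, this is followed by a long-running command
# (e.g. systemctl start httpd and then bash) so the container stays up.
```

Because the flag file lives inside the container's filesystem, re-creating the container re-runs the setup, which is what I want.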

With this change, I removed the unnecessary packages and settings from my virtual machine, and I can control how the environment is set up for each web service.

Use Azure Blob to store files

In this post, I use the Azure Storage API to upload files to an Azure blob container. What's more, I use FileSystemWatcher to monitor file changes in a directory and update the blob when a file changes. This can back up files to Azure Blob. To use it, you need an Azure Storage account. You can test it locally using the Azure Storage Emulator.

I'm using .NET Core 3.0 on Linux. First, let's create the project named azureblob and add necessary packages

dotnet new console -n azureblob 
dotnet add package Microsoft.Azure.Storage.Blob 
dotnet add package Newtonsoft.Json

The Azure Blob API lives in Microsoft.Azure.Storage.Blob, and I need Newtonsoft.Json to read the settings. Speaking of settings, I create this settings class:

[JsonObject(NamingStrategyType = typeof(SnakeCaseNamingStrategy))]
public class Settings
{
    public string BlobConnectionString { get; set; }
    public string BlobContainer { get; set; }
    public string MonitoredDirectory { get; set; }
}

Correspondingly, the settings file looks like this.

{
    "blob_container": "azureblobtutorial",
    "blob_connection_string": "<ReplaceWithYourStorageConnectionString. You can find the one for Azure Storage Emulator from the doc.>",
    "monitored_directory": "<ReplaceWithYourDirectory>"
}

Next, I create a class to call the Azure Blob API. The key is to create the CloudBlobClient and get the blob container.

var storageAccount = CloudStorageAccount.Parse(connectionString);
var blobClient = storageAccount.CreateCloudBlobClient();
this._blobContainer = blobClient.GetContainerReference(blobContainer);
this._requestOptions = new BlobRequestOptions();
this._blobContainer.CreateIfNotExists(this._requestOptions);

Before uploading or deleting a blob, we should get a reference to the blob by its name. I use the file path as the name here

var blob = await this._blobContainer.GetBlobReferenceFromServerAsync(filePath, cancellationToken);

Then we can use the blob reference to upload or delete the file in Azure Blob. For example, to delete it:

await blob.DeleteIfExistsAsync(cancellationToken);

Those are the basic operations on an Azure blob. Next, we monitor the file changes in the directory set in monitored_directory. We use FileSystemWatcher. I need to set the notification filter to listen to the right events, and hook up the event handlers as well

this._watcher = new FileSystemWatcher(monitoredDirectory);
this._watcher.NotifyFilter = NotifyFilters.LastWrite |
                             NotifyFilters.Size |
                             NotifyFilters.FileName |
                             NotifyFilters.DirectoryName;
this._watcher.IncludeSubdirectories = true;
this._watcher.Changed += this.OnFileChanged;
this._watcher.Created += this.OnFileChanged;
this._watcher.Renamed += this.OnFileRenamed;
this._watcher.Deleted += this.OnFileChanged;
this._watcher.Error += this.OnFileWatchError;
this._watcher.EnableRaisingEvents = true;

Whenever OnFileChanged receives a created, deleted, or changed event, it eventually triggers an upload of or delete on the blob. OnFileRenamed treats a rename as a deletion (of the old name) plus a creation (of the new name).

The complete code is in this commit in this GitHub repo. It still requires more work to fully and correctly back up files to Azure Blob.

  1. When a directory is renamed, the names of the blobs for the files and subdirectories under it aren't automatically updated.
  2. It doesn't implement differential upload. A small change to a file uploads the whole file, which can waste bandwidth for a large file.
  3. When there are frequent changes to the same file, it doesn't batch them. It uploads the whole file each time.

Regardless, it demonstrates how to use Azure Blob in a program, as well as how to monitor file changes.

Redirect Assembly Binding

In a large .NET project, complex dependencies can be inevitable. To make it worse, multiple dependencies may depend on the same assembly but different versions of it. There is already a way to redirect binding to a different version of an assembly in your app. This document outlines how to do it for an application. Sometimes, that's not enough.

The document outlines these approaches

  1. The vendor of assemblies includes a publisher policy file with the new assembly.
  2. Specify the binding in the configuration file at the application level
  3. Specify the binding in the configuration file at the machine level.

The first approach requires the vendor to publish the publisher policy file. The file has to be installed into the global assembly cache, which affects every application on the machine.

What if the vendor doesn't provide this file? Then we can specify the binding in a configuration file using <bindingRedirect>. The configuration applies to one application if it's at the application level, or to every application if it's at the machine level.

What if there is no publisher policy file, and no configuration file for the binding at the application or machine level? Can that happen when you're writing your own application? Probably not. This issue usually appears when you write a plugin or some assembly that runs inside an application you don't own. For example, you're writing a test that's run by vstest. You use a library A which depends on assembly B version 1.0, and you also use a library C which depends on assembly B version 2.0. At runtime, one version of assembly B will not be loaded.

You don't own assembly B, and you don't own the application that runs your assembly. Because of that, you cannot count on the publisher policy file or the application-level configuration file. You don't want to create a machine-level configuration file either. And there is no assembly-level configuration file: such a file is ignored at runtime. I think the best bet is to load the dependency in the program yourself. When the runtime doesn't find the right assembly, it raises the AppDomain.AssemblyResolve event.

How do we use AppDomain.AssemblyResolve? The basic idea is:

  1. Check whether the assembly is loaded.
  2. If it's loaded, and if the loaded version satisfies your requirements, then return the loaded one.
  3. If the assembly isn't loaded and you find one that satisfies your requirements, call Assembly.LoadFile to load it and return it.

In pseudocode, after registering the handler with AppDomain.CurrentDomain.AssemblyResolve += OnAssemblyResolve, it is

static Assembly OnAssemblyResolve(object sender, ResolveEventArgs args)
{
    if (args.Name.Contains("AssemblyB"))
    {
        foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            if (assembly.FullName.Contains("AssemblyB"))
            {
                return assembly;
            }
        }

        return Assembly.LoadFile("PathToAssemblyB");
    }

    return null;
}

There are, however, some caveats. First, the handler works at the app domain level, meaning it may affect every assembly in the same app domain. AppDomain.AssemblyResolve passes an event parameter of type ResolveEventArgs. Its ResolveEventArgs.RequestingAssembly property indicates which assembly requested the one that couldn't be resolved; you can use it to make sure you're loading the assembly in the right context. Second, if you call one of the Assembly.Load overloads inside the handler and that call raises another AssemblyResolve event, you'll get a stack overflow. You can check out this guidance.

Used well, I think AppDomain.AssemblyResolve can supplement configuration files in handling assembly binding issues in an application.

Update Azure Bot Using Command Line

We can of course manage an Azure Bot Service in different ways: from the portal, from Visual Studio, or from the command line. I like to use the command line. It's convenient: I don't need to navigate the UI in the portal or Visual Studio; I just execute the same command (or the last command) again. We can create, publish, and update an Azure bot efficiently.

The Azure Bot Service documentation is a good place to start learning to develop an Azure bot. There is a section about deploying the bot using the CLI. It covers the commands to create and publish the bot: az bot create creates a new bot, and az bot publish publishes your code to it.

But wait. What if I already have a published bot? I've spent so many hours debugging my code and making my bot more intelligent, and I want my bot to run the new code. Of course, you can do that from Visual Studio; I would rather use the command line. Here is the command:
az bot update --name <BotName> --resource-group <GroupName>
Run it in the top directory of your code. For example, if /path/to/BotCodeInJavaScript contains your code, that's the directory where you run the command.

That's it. Your published bot is smarter.


Bad const, Bad enum

Many languages have const and enum. The compiler treats enum values as constant integers, so an enum can go bad in the same way a const can. With that in mind, I'll use const as the example to demonstrate how they go wrong.

The Good Side of a const

The meaning of const is, as the name indicates, that the value is a constant. There are run-time constants and compile-time constants. A run-time constant's value doesn't change while the program runs. A compile-time constant's value is already known when the code is compiled; the compiler forbids any assignment to the const except the first initialization, and it can substitute the value directly into the generated code. That's the good side of a const: use it when you don't want the value of a variable to change. It's encouraged in general, and the compiler can use the knowledge to optimize your code.

When It Goes Bad

It causes problems when you use a const in a shared library (.so) or dynamic library (.dll). Let me demonstrate with an example in C++ on Linux. The same applies to C++ on other platforms, and to C#.

1. Create a header file with a const in the class: ConstHeader.h

#ifndef __CONSTHEADER_H__
#define __CONSTHEADER_H__

const int TestConst = 10;

class ConstHeader
{
public:
	ConstHeader();

	int get_num() const;

private:
	const int num;
};

#endif //__CONSTHEADER_H__

2. Create source file ConstHeader.cpp

#include "ConstHeader.h"

ConstHeader::ConstHeader() :
	num(TestConst)
{
}

int ConstHeader::get_num() const
{
	return num;
}

3. Create the program that uses the const: UseConst.cpp

#include <iostream>
#include "ConstHeader.h"

using namespace std;

int main(int argc, char** argv)
{
	ConstHeader header;
	cout << "number in executable " << TestConst << endl
		<< "number in library " << header.get_num() << endl;
	return 0;
}

4. Compile ConstHeader.cpp into a shared library

$ g++ -shared -fPIC -Wl,-soname,libConstHeader.so -o libConstHeader.so ConstHeader.cpp

5. Create the program linking to the shared library

$ g++ -o UseConst UseConst.cpp -L. -lConstHeader

6. Run the program

$ LD_LIBRARY_PATH=. ./UseConst
number in executable 10
number in library 10

That looks pretty good. The program uses the same const value as the one in the shared library.

Now, what happens if we update the shared library?
Let's change the value of the const TestConst in the shared library

const int TestConst = 20;

Rebuild the shared library and run the program again, without recompiling the executable

$ g++ -shared -fPIC -Wl,-soname,libConstHeader.so -o libConstHeader.so ConstHeader.cpp
$ LD_LIBRARY_PATH=. ./UseConst
number in executable 10
number in library 20

Oops. Where the program uses the const directly, it gets 10, while the shared library shows the value is 20.

What's Wrong

Let's pause a minute to think about what changed. TestConst is a compile-time constant, so when UseConst.cpp was compiled, the value 10 was baked directly into the executable. Updating the shared library doesn't touch that baked-in copy; only the code compiled inside the library, like get_num, sees the new value 20. Pay extra attention to consts that are defined in a shared library's headers: when the value changes, it can be a real pain to debug. The same applies to an enum's implicit values. For example:

enum class Color
{
	red = 0,
	blue,
	yellow,
};

If this is from a third party library and the enum Color is changed in a new version. E.g.

enum class Color
{
	red = 0,
	green,
	blue,
	yellow,
};

Your program will be broken if it uses Color::blue or Color::yellow and isn't recompiled against the updated header: the baked-in value 1 now means green, and 2 now means blue.

All the consts and enums defined in header files can be accessed from a separate compilation unit. They are actually interfaces, part of the contract. As a library user, when you use an interface, you expect the same interface to do the same thing in every version of the library; your application relies on that to function well. As a library author, you don't want to drive your users crazy. Don't change the public interfaces.

How to Mitigate It

It depends on your purpose. As a library author, if you just want to provide a well-defined value to the library user, use a function to return the value. This adds some function-call overhead, but it lets you change the value in a future version. In C#, you can also declare the variable readonly instead of const. Either way, the value is no longer a compile-time constant: the runtime reads it from your library, and it still can't be modified by users.

For enums, it's a little more complicated. The first approach is to always append new enumerators at the end: taking Color as an example, instead of adding green between red and blue, you add it after the last one, yellow. The second approach is to always give each enumerator an explicit value, as we do for red. Both have problems in collaborative work on a large team: there is no way to enforce appending at the end, and with explicit values, two people may pick the same value for new enumerators and check in at the same time. Comments in the code won't always help either.

Remember: public consts and enums are interfaces. Don't change them. That is the best way to prevent them from going bad.

Make Investment Work

From school, training, and work, we get deeper knowledge in software development. But it's not uncommon to lack investment education, since schools don't teach it to everyone unless we happen to be in a related major. I've decided to write something about investment to reflect what I've learned on my own so far.

Financial advisor

A financial advisor is a person who provides financial advice or guidance to clients. They can be insurance agents, investment managers, tax advisors, real estate planners, etc. Investment managers deal with their clients' investment portfolios. It may be good to have your money managed by experts, but you have to be cautious in choosing a financial advisor. They charge fees as compensation: some charge 1% - 1.5% of the assets they manage, some charge by the hour. Whatever your criteria are, please add this one: a fiduciary. A fiduciary is held to a higher ethical standard and is required to act in the client's best interest. That means, if the client had all the prerequisites and the information, the client would have taken the same action. A non-fiduciary is only required to take suitable and reasonable actions, which may not be in the client's best interest.

Actively managed fund and passively managed fund

An actively managed fund is always attended by a manager, who is supposed to be an expert. They pick the stocks, bonds, and other investment vehicles in the fund. A passively managed fund, on the other hand, doesn't require a fund manager to pick investments; it usually matches the index it follows. With less human intervention, it incurs lower fees.

You may believe an actively managed fund will perform well, since an expert always keeps an eye on it and adjusts immediately to market changes. You remember the chart of the fund always looks good. For example, in the screenshot, FSPTX outperforms the benchmark. (Note the chart is for illustration only; it doesn't mean the fund is good or bad.) Is that the whole story? Do you notice the text below it? The performance data featured represents past performance, which is no guarantee of future results.

Example Fund Prospect

from https://fundresearch.fidelity.com/mutual-funds/summary/316390202

This study by S&P Dow Jones Indices in 2016 shows that 90 percent of actively managed funds failed to outperform their index targets over the preceding one-year, five-year, and 10-year periods. Why don't you hear about such funds from your fund managers? Underperforming funds can be discontinued, and fund managers don't want you to lose confidence.

One major factor in the underperformance is the fees of actively managed funds. The cost can be the management fee, trading commissions, etc. All financial advisers need compensation, no matter how well the fund performs. What really matters is how much you get after the fees. Let's look at the fees of the fund FSPTX.

Example Fund Fee

from https://fundresearch.fidelity.com/mutual-funds/fees-and-prices/316390202

Its expense ratio is 0.77% and its Exp Cap (Voluntary) is 1.15%. The expense ratio is what you pay right now; the Exp Cap is the limit on the fee you may pay in the future, which means you might end up paying a 1.15% fee. For example, suppose you have $10000 invested and the market value doesn't change for 10 years. At a 0.77% expense ratio, you pay $10000 * 10 * 0.77% = $770. At 1.15%, you pay $10000 * 10 * 1.15% = $1150. That's $380 more on $10000, even when you don't have any gains. The fee of a passively managed fund can be lower than 0.1%; I'm sure you see the difference. If you worry about the fee, you may be scared to learn that the Expense Cap may be terminated or revised at any time. You don't have control over how much you're charged.
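The fee arithmetic above as a quick check, under the same simplifying assumption as in my example: a flat $10000 balance held for 10 years, with no growth and no compounding.

```python
principal = 10_000
years = 10

# total fee paid = balance * years * annual expense ratio (no growth assumed)
fee_at_expense_ratio = principal * years * 0.0077  # 0.77% expense ratio
fee_at_expense_cap = principal * years * 0.0115    # 1.15% expense cap

print(round(fee_at_expense_ratio))                       # fee at 0.77%
print(round(fee_at_expense_cap))                         # fee at 1.15%
print(round(fee_at_expense_cap - fee_at_expense_ratio))  # the difference
```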

First, most funds cannot outperform the index, so you don't gain much to begin with. Second, you pay more in fees for an actively managed fund. A passively managed fund mostly just mirrors the index it's tracking; it doesn't charge high fees to get performance similar to the index. An actively managed fund needs to perform much better than a passively managed one to give you equal gains after fees. That makes passively managed funds more attractive.

Diversification and re-balance

Don't put all your eggs in one basket. For example, if you only invest in one company's stock, your return on investment is tied to that one stock. When the company has a rough year, or even goes bankrupt, you may lose everything. The best way to reduce the risk is to diversify. You can diversify within the same category, for example by buying 25 – 30 unrelated stocks. Or you can diversify across categories and have everything in your portfolio: stocks, bonds, precious metals, etc. The theory is that the same economic data won't affect all of them the same way. It may hit one stock or one category heavily but be neutral or good for another. In the end the positives smooth out the negatives. You'll need to decide how to allocate your investment, depending on your risk tolerance and investment goals.

What strikes me is re-balancing. It has a twofold effect: it shields you from extra risk, and it is a simple way to buy near the bottom and sell near the top. For example, say you have a portfolio of 80% stocks and 20% bonds, and the stocks grow to 90% of it. That may mean stocks are going up; because the market has cycles, they'll go down some day. If you keep investing in stocks and the stock market crashes, you'll lose a lot. Why not re-balance and hold more bonds? You may sell some stocks and buy bonds, or direct new money into bonds, until you're back at 80% stocks and 20% bonds. That may not be the top when you sell the stocks, but who knows; don't try to time the market. You're reducing the risks while locking in some gains.

Few can guarantee getting in and out of the market at the right time, but there are many ways to reduce the risks. Diversification and re-balancing are good tools in your risk management.


Lump-sum vs recurring

Do you save money until it's large enough before you invest? Or do you set aside a smaller amount from every paycheck and invest monthly? The first approach is lump sum; the second is recurring.

Both have advantages and disadvantages. With lump sum, if you buy when the market is at the bottom, you'll get much larger gains: every rise afterwards counts as your gain. Remember to sell at the right time too; you need a lot of luck on both ends. If the market crashes after you buy, you may wait a long time just to break even. With recurring, regardless of how the market moves, you keep investing smaller amounts. Some of that money may lose; some of it gains. By spreading out your investment like this, you also reduce the risks. Lump sum can give you a much higher return when you time it right; recurring reduces the risk of timing the market wrong.


I've been talking mostly about avoiding fees and reducing risk. What really makes it work is discipline. Once you decide how to allocate your investment, how to re-balance, and whether to invest lump sum or recurring, don't let good or bad news in the market sway you. Research and analyze before you make any changes, and try to act rationally, not emotionally. The best way to get more gains is to time the market right, but that's almost impossible, and luck matters far more than expertise there. The second best way is to reduce risk. Make sure you don't lose money before you chase gains.

Make Your Website Accessible

In my previous post, I talked about why accessibility matters. Here are some tips to make your website accessible.

Use HTML semantic elements to structure your website

Semantic elements are elements with meaning. Screen readers can describe that meaning to the user and navigate the elements accordingly. Don't use semantic elements purely for layout. For example, <table> can be used to lay out unrelated data in a particular way. Don't do this: screen readers will try to interpret the content as rows and columns of a table, which will confuse users. On the other hand, if your data and its relations do carry meaning, use the matching semantic elements to organize it. For example, we can use a <div> with CSS styles to make some text look like a heading. If the text actually is a heading, use heading elements such as <h1>, <h2>, and <h3> instead. Use semantic elements to help screen readers, and only use them when the data matches the semantics of the element. Some other semantic elements are <header>, <nav>, etc.
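To make the heading example above concrete, here is a minimal sketch (the text and anchor targets are made up for illustration):

```html
<!-- Avoid: a <div> styled to look like a heading carries no meaning for a screen reader -->
<div style="font-size: 2em; font-weight: bold;">Our Services</div>

<!-- Prefer: a real heading element, along with other semantic landmarks -->
<header>
  <h1>Our Services</h1>
  <nav>
    <a href="#pricing">Pricing</a>
    <a href="#contact">Contact</a>
  </nav>
</header>
```

Both render as big bold text, but only the second tells a screen reader "this is the top-level heading" and lets the user jump to it directly.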

Add alt to <img>

A picture is worth a thousand words. I know images are important. They're funny, they tell a lot of stories, and some things can only be conveyed with a picture. But think of this: not all users are able to see the picture, and today's technology cannot analyze a picture and say precisely what is on it, let alone interpret it in the context of your website. So use the alt attribute on the <img> element and give it a concise description of the image. The screen reader can read that text to the user. Please help the user understand what you're trying to say with a picture.
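For example (the file names and description here are hypothetical):

```html
<!-- The alt text concisely describes what the image conveys in context -->
<img src="stolen-bike.jpg" alt="A red road bike locked to a rack outside the library">

<!-- A purely decorative image can use an empty alt so screen readers skip it -->
<img src="divider.png" alt="">
```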

Support keyboard navigation in your website

Don't assume all users browse your website with a mouse. Come up with a good way to navigate your website with only the keyboard. The most important key is TAB: it should move focus to the next logical element on the page. If your web page is a form, users should be able to use TAB to go through all of the input fields. If your web page has many different sections, use some form of navigation bar to help users get to each section quickly.
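A small sketch of how the tab order works (the form fields are hypothetical):

```html
<!-- Native form controls are reachable with TAB by default, in document order -->
<form>
  <label for="name">Name</label>
  <input id="name" type="text">
  <label for="email">Email</label>
  <input id="email" type="email">
  <button type="submit">Submit</button>
</form>

<!-- A custom widget built from a <div> is NOT focusable by default;
     tabindex="0" adds it to the natural tab order -->
<div role="button" tabindex="0">Custom button</div>
```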

Use WAI-ARIA

If you can use semantic elements, that's great. But if you find yourself in a situation where semantic elements aren't an option, follow WAI-ARIA. It defines a set of aria-* attributes (along with role) that you can use to describe your information to users with disabilities. Some common and important attributes are aria-label, role, etc. Here is the list of aria-* attributes.
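Here are the two attributes mentioned above in a minimal, hypothetical sketch:

```html
<!-- aria-label gives an accessible name to a control whose visible content isn't descriptive -->
<button aria-label="Close dialog">×</button>

<!-- role tells assistive technology what a non-semantic element acts as
     (when a real <nav> element is an option, prefer it over role="navigation") -->
<div role="navigation" aria-label="Site menu">
  <a href="/">Home</a>
  <a href="/about">About</a>
</div>
```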

Test with Screen Readers

I've mentioned screen readers above. Please test your websites with a screen reader. Listen to what the screen reader tells you, rather than looking at what's presented visually. This will give you a sense of how users with disabilities perceive your website. As in a user study, try to find out where the gaps are for a person with disabilities and close them. There are screen readers such as Narrator on Windows, JAWS, etc. Play with them and get familiar with them.


I hope you found some useful information in this article. We can definitely do some small things to improve the whole experience for those with disabilities: for example, add alt to <img>, use semantic elements or aria-* attributes, and test with a screen reader.

Why Accessibility Matters

Technology pervades our daily life, and we enjoy the convenience it brings. But a large portion of the population, with varied forms of disabilities, may not feel the same way, because many applications are not accessible. Accessibility matters because it helps more people use our applications and brings convenience to all of us.

Accessibility is Important to Users

According to the US Census Bureau, 1 in 5 people in the US has a disability (https://www.census.gov/newsroom/releases/archives/miscellaneous/cb12-134.html).
Around the world, about 15% of the population has some form of disability (https://www.disabled-world.com).
Even those who have no issues when they're young may develop them as they age: they may not see as well, have difficulty moving around, or have shaking hands.
Yes, that is terrible, and we can help. "But I'm a developer. I just write code. It's not related to my work. What can I do to help them?" I'm glad you asked: it is related to our work. It's more and more important to create applications that are also accessible to people with disabilities. Computers, the Internet, and mobile devices bring convenience to our lives. For example, we don't need to go to a bank for basic banking, and a lot of government-related activities can be done online. A few weeks ago, I filed a report of a stolen bike to the police department online and got an acknowledgment the next day. Technology has saved us many trips, and the applications we write have benefited so many people. If we are not aware of it, though, we may unintentionally leave behind people who have some form of disability. It's important to keep that in mind when we develop applications: we need to make our applications accessible to all.

Accessibility Adds to a Good UX Design

Speaking of good UI, we usually think of elegant layout, attractive colors, and varied fonts, which is also what we interact with most of the time. When I studied User Interface Design at university, I also thought only of creating good UI for our eyes, until I realized how important it is to create accessible applications. Accessibility is an important part of UI design. A good design doesn't just mean a fancy UI; the first and most important principle of a good design is that it's usable. I remember the cover of the book The Design of Everyday Things. I wouldn't argue about whether the teapot looks good or bad, but it is definitely not usable. Then what's the point of creating such a thing for people to use? That's what we get if we don't pay attention when we develop our applications. When we keep accessibility in mind, we start to focus on the core workflow of our application. That not only helps people with disabilities, but also improves the experience for those without. The application will focus on the work it was designed for, and the UI will be simpler, with fewer distractions.

How Accessibility Benefits Users

It's even more powerful to watch how visually impaired users use applications. The last thing I expected was that they play games. That's amazing.

Below is a short documentary about a blind person who loves to play games.

Gaming Through New Eyes - Award Winning Short Documentary - Blind Gaming

There are indeed many obstacles for them in games, but we can make games much better for them.

Conclusion

There are already many applications that are not accessible, and we cannot change all of them at once. What we can do is design new applications with accessibility in mind and fix accessibility bit by bit in old ones. I understand that we always have limited resources, tight schedules, and complicated work. But little by little, we can make the world a little better.