My First Cordova Project

I developed a mobile app recently. VS is an excellent tool for managing the project and debugging. I'd like to start with setting up the project in VS, and then go through my experience on the mobile development journey.

The app does bookkeeping for your investments, whether they're in a brokerage account, a retirement account, or an HSA. Besides bookkeeping, it also calculates the rate of return on the investment.

First of all, I use Visual Studio 2015 Community with the Tools for Apache Cordova. I chose to install the Windows Phone 10 SDK; you can install the Android emulator and the Android SDK instead, depending on your target platform.

VS installation screenshot

Note that even if you only want to target Windows 10 and Windows Phone 10, you still need to install the Windows Phone 8.1 Tools and SDKs. Otherwise you won't be able to debug on your Windows 10 phone: the build complains that bin/arm/dbghelp.dll is not found, and that file comes with the Windows Phone 8.1 Tools.

A framework can improve productivity and accelerate development. It provides the backbone of the application and glues all the major parts together; I just need to follow its patterns and fill in the parts. I chose the Ionic framework.

What UI is suitable for this app? Ionic provides a few templates: simple, tabs, and side menu. I think the side menu gives the most screen real estate while staying flexible for navigation.

It's easy to create an Ionic project in VS. In the "New Project" dialog, search for Ionic in the Online tab, then download the template you like.

VS search Ionic template screenshot

After the template is downloaded, re-open the "New Project" dialog; the Ionic project templates now appear in the Installed tab.

VS new Ionic project screenshot

Since it's a bookkeeping app, it needs to persist data. SQLite is a good fit for mobile applications, and I found a plugin on GitHub (https://github.com/litehelpers/cordova-sqlite-ext.git).

In VS, open config.xml (View Designer), then add a custom plugin: point it at the git repository and install it.

VS Cordova add plugin screenshot
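Once the plugin is installed, opening a database and running a query looks roughly like this. This is a minimal sketch based on the plugin's documented API; the database name and table are made up for illustration:

document.addEventListener('deviceready', function () {
    // The plugin is only available after Cordova fires deviceready.
    var db = window.sqlitePlugin.openDatabase({ name: 'investment.db', location: 'default' });

    db.transaction(function (tx) {
        tx.executeSql('CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, amount REAL)');
        tx.executeSql('INSERT INTO records (amount) VALUES (?)', [100.0]);
    }, function (error) {
        console.log('Transaction error: ' + error.message);
    });
});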

So far, I've set up the project.  The journey has started.

Setting Up CSF on VPS

I recently noticed that my VPS was slow. The blog takes a long time to show any content, and when I log in via SSH, I usually don't see what I type immediately; the command is echoed a few minutes later.

I started my investigation by running 'top'. The CPU wasn't running crazily, memory wasn't heavily loaded, and there was no suspicious process. Everything looked good, but the connection was still slow! So I asked the VPS provider: why is my VPS slow to respond when CPU and memory aren't overloaded? The support team took a look and responded with one IP address, 191.96.249.54, which had established many connections to my VPS. I then searched for the IP address online and found this Server Fault question (http://serverfault.com/questions/778831/how-to-block-an-attack-on-wordpress-using-ufw-on-ubuntu-server). It's an attack! Somebody there suggested using ipset to add a blacklist. That's probably the way to go, because when I looked at the Apache log, there were also connections from 191.96.249.53; they're from the same organization.

I thought I had found the solution. I installed ipset, only to find that it didn't run. I got this error:

ipset v6.11: Cannot open session to kernel.

Some articles online say it needs a kernel patch, but I don't control the kernel on my VPS. So I contacted my VPS provider again. We couldn't just upgrade the kernel, but they suggested an alternative: CSF (http://www.configserver.com/cp/csf.html).

CSF can do what ipset does; I just need to use its blacklist. Besides that, it can monitor logins and alert you when somebody makes many attempts in a short time to guess a password. There are other protections too, all documented in the readme file, and the configuration file explains every option. It's a lot of information to digest and you have to make your own choices. I'll just put down some basic points, with an illustrative excerpt after the list:

  1. Set TESTING to 1 first. With that value, CSF stops itself 5 minutes after it's started, so if you accidentally lock yourself out, you can regain access to your VPS after at most 5 minutes. Remember to set it back to 0 once you've tested your configuration file.
  2. TCP_IN and UDP_IN are the ports that incoming connections are allowed to connect to. At the very least, put your SSH port number in TCP_IN so that you can still access your VPS.
  3. On my system, CSF couldn't find /usr/bin/host; that showed up in its log file. Remember to check the log file for errors and fix them.
  4. csf.deny contains a list of IPs from which incoming connections are dropped. It supports CIDR notation (https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation). I put the suspicious IP addresses in this file.
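For illustration, here's roughly what the relevant pieces look like; the port list and the IP range are examples, not recommendations:

# /etc/csf/csf.conf (excerpt)
TESTING = "1"          # set back to "0" after you've verified the rules
TCP_IN = "22,80,443"   # keep your SSH port in this list

# /etc/csf/csf.deny
191.96.249.0/24        # block the suspicious range using CIDR notation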

After CSF started running, the Apache log showed no more connections from the suspicious IPs, and the connection to my VPS was fast again.

Translate in AngularJS

I'm using angular-translate and angular-translate-loader-partial to translate an AngularJS application. angular-translate provides directives, filters, and a service for getting text translated, and it supports several ways to load translation files. angular-translate-loader-partial is one loader that lets you load translation files only when you need them; that's what I'm using right now. The documentation at https://angular-translate.github.io/ covers all of this, but there's a gap in the end-to-end scenario, and translation fails.

Here's a summary of what needs to be done according to the documentation:

    1. Tell the translate provider to use the partial loader and the translation file path pattern.
      $translateProvider.useLoader('$translatePartialLoader', {
          urlTemplate: '/i18n/{part}/{lang}.json'
      });

      urlTemplate tells the loader how to find the translation file. In this example, files live under the /i18n/ directory. {part} is provided later; {lang} is the language in use, which is also set later.

    2. Set the preferred language and fallback languages in your module configuration function, e.g.
      $translateProvider.preferredLanguage('zh_CN').fallbackLanguage('en_US');

      That tells the translate provider to look for the Simplified Chinese translation and fall back to English if it's not found. This sets {lang} for the loader.

    3. Load the translation file and refresh translation tables. This is done by
      $translatePartialLoader.addPart('contact');
      $translate.refresh();

      The translation file for 'contact' will be loaded. It sets the {part} in urlTemplate. So the translation file path is /i18n/contact/zh_CN.json.

      $translate.refresh() must be called every time a part is added. The documentation also suggests refreshing the translation tables on the $translatePartialLoaderStructureChanged event:

      app.run(function ($rootScope, $translate) {
        $rootScope.$on('$translatePartialLoaderStructureChanged', function () {
          $translate.refresh();
        });
      });

That seems straightforward. Anyone can add that code and start using directives, filters, or the service to translate their application. But it won't just work.

Both $translatePartialLoader.addPart() and $translate.refresh() are asynchronous. When you use directives, filters, or the service to ask for the translation of a given id, the translation table may not be loaded yet! That's what happened in my application. The documentation doesn't warn about this, and doesn't present a solution either. The solution is simple, however: both addPart() and refresh() return promises, so you have to ask $translate for the translation after the promise from refresh() resolves. You can do something like:

$translate.refresh().then(function() {
    $translate('TRANSLATE_ID').then(function(data) {
        $scope.SomeLabel = data;
    });
});
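Putting the pieces together, a controller might look like the sketch below. The part name 'contact' and the key TRANSLATE_ID come from the examples above; the controller name is made up:

app.controller('ContactCtrl', function ($scope, $translate, $translatePartialLoader) {
    // Queue the part, refresh, and only ask for the translation
    // once the refresh promise resolves.
    $translatePartialLoader.addPart('contact');
    $translate.refresh().then(function () {
        return $translate('TRANSLATE_ID');
    }).then(function (text) {
        $scope.someLabel = text;
    });
});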

This is tedious: every time you translate something, you have to double-check whether the translation table is ready. And you cannot use directives and filters, because they won't be updated when the translation table becomes ready. Here's an example demonstrating the solution: http://plnkr.co/edit/MrliEBMed17f3yqeRqz3

angular-translate is flexible in its loaders and in how you translate (directives, filters, or the service). But there are some undocumented limitations (or bugs): when you use the partial loader, you cannot use directives and filters. At least there's always another way of doing it, so angular-translate still doesn't fail at getting the work done.

C# Variance on Structs

In C#, covariance and contravariance allow implicit conversion of generic type arguments. But they don't apply to structs or other value types, including the primitive types (except string, which is a reference type) and enums.

Only interface and delegate declarations can have covariant or contravariant generic type parameters.

Covariance allows assigning an object instantiated with a more derived type argument to a variable instantiated with a less derived type argument. It's declared with the out keyword on the generic type parameter. For example, this declares an interface with a covariant generic parameter: interface ICovariantInterface<out T>.

Contravariance does the reverse: it allows assigning an object instantiated with a less derived type argument to a variable instantiated with a more derived type argument. It's declared with the in keyword on the generic type parameter. For example, this declares an interface with a contravariant generic parameter: interface IContravariantInterface<in T>.

There are some built-in covariant and contravariant generic interfaces in C#, such as IEnumerable<out T> and IComparable<in T>.

I usually think of covariance by analogy with a class hierarchy: it's similar to assigning a derived class to a base class or interface variable. Contravariance works in the opposite direction.

But we cannot use structs or other value types with covariance and contravariance. Variance only applies to reference types, and reference types and value types have different memory layouts.

So this is not allowed: IEnumerable<object> baseGeneric = new List<int>();.
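Here's a short snippet of my own that shows both directions working for reference types, with the value type case commented out:

using System.Collections.Generic;

class VarianceDemo
{
    static void Main()
    {
        // Covariance: string is a reference type, so this compiles.
        IEnumerable<object> strings = new List<string>();

        // Contravariance: IComparer<in T> lets a comparer of the base type
        // stand in for a comparer of a more derived type.
        IComparer<string> comparer = Comparer<object>.Default;

        // Value types don't participate in variance:
        // IEnumerable<object> ints = new List<int>();   // compile error
    }
}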

Customize IntelliTrace to Collect Enums

Guillaume Rouchon's post illustrates how to add a custom IntelliTrace event. This post complements it with an enum type and updates for Visual Studio 2015 Update 2.

Let me summarize how to add a custom IntelliTrace event. The file you have to change is in a subfolder of the Visual Studio installation folder: <VS installation folder>\Common7\IDE\CommonExtensions\Microsoft\IntelliTrace\14.0.0\en\collectionplan.xml. You need admin privileges to edit it. This file only affects F5 debugging in Visual Studio; if you use the standalone IntelliTrace collector, you have to pass a collection plan file on the command line, so you can either modify the default one that comes with the standalone collector or write your own.

To add a custom event, you need to:

  1. Add a category under TracepointProvider/Categories
  2. Add an assembly under TracepointProvider/ModuleSpecifications
  3. Add your own diagnostic event specification under TracepointProvider/DiagnosticEventSpecifications

There are two ways of querying parameter values and return values. The first is DataQuery: a simple declarative way that uses dot notation to get the value of a parameter or of a field in the parameter. It only supports primitive types. The other is ProgrammableDataQuery: you provide your own IProgrammableDataQuery implementation to query a complex type.

For enums, we're going to use DataQuery. We need to specify a type in the data query, and intuition tells us to use the enum type. But that doesn't work: the right type is the underlying integral type, for example int. The value is an integral value too.

Let me illustrate. I created a sample console application:
using System;
namespace ConsoleApplication1
{
    enum Color
    {
        Red,
        Green,
        Blue
    }
    class Program
    {
        static void Main(string[] args)
        {
            PrintColor(Color.Blue);
        }
        static void PrintColor(Color color)
        {
            Console.WriteLine("Color: " + color);
        }
    }
}

Let's change the collection plan.

1. Add a new category.
<Category Id="enumSample" _locID="category.enumSample">Enum Sample</Category>
2. Add a new module
<ModuleSpecification Id="sample">ConsoleApplication1.exe</ModuleSpecification>
3. Add a diagnostic event specification
We're going to query the parameter color in the method PrintColor.

<DiagnosticEventSpecification>
    <CategoryId>enumSample</CategoryId>
    <SettingsName _locID="settingsName.ConsoleApplication.PrintColor">PrintColor (Color)</SettingsName>      
    <SettingsDescription _locID="settingsDescription.ConsoleApplication.PrintColor">
       Print the color
    </SettingsDescription>
    <Bindings>
        <Binding>
            <ModuleSpecificationId>sample</ModuleSpecificationId>
            <TypeName>ConsoleApplication1.Program</TypeName>
            <MethodName>PrintColor</MethodName>
            <MethodId>ConsoleApplication1.Program.PrintColor(System.Int32):System.Void</MethodId>
            <ShortDescription _locID="shortDescription.PrintColor.Color">Print Color "{0}"</ShortDescription>
            <LongDescription _locID="longDescription.PrintColor.Color">
                Going to print the color in integral value "{0}"
            </LongDescription>
            <DataQueries>
                <DataQuery index="0" type="Int32" name="color" query=""
                    _locID="dataquery.PrintColor.Color" _locAttrData="color"></DataQuery>
            </DataQueries>
        </Binding>
    </Bindings>
</DiagnosticEventSpecification>

Save the file and restart VS. The custom event shows up in Tools/Options/IntelliTrace/IntelliTrace Events.

IntelliTrace Event Options screenshot

When I debug the program, the custom event appears in the Events table. Look at the description: it shows me the integral value of the enum.

VS Events tab screenshot

How simple it is to add a custom IntelliTrace event! This is helpful for diagnosing a code path that's prone to errors when it changes. Once you have your own collection plan for the program, you can just debug the program, let it run, and spot errors in the Events table.

Run Windows Phone 10 Emulator on Compressed Disk

If you are like me, you may have your disk compressed while also trying to do some Windows Phone development. I'm starting to develop for Windows Phone 10 and hit this problem:

The emulator is unable to create a differencing disk

The following screenshot explains that virtual hard disk files must be uncompressed and unencrypted, and must not be sparse:

Windows Phone Emulator Error

Yes, I do have the whole disk compressed. So as long as I uncompress the virtual hard disk files, I should be able to run the emulator. Unfortunately, the message box doesn't tell me what the files are or where they are.

So I searched and found a few results. Most weren't helpful: they were about the Windows Phone 8 emulators (e.g. https://msdn.microsoft.com/library/windows/apps/ff626524(v=vs.105).aspx#BKMK_uncompressed). I followed the paths there and uncompressed C:\Users\username\AppData\Local\Microsoft\XDE, but that didn't cover Windows Phone 10.

After a bit more searching, I found the location of the Windows Phone 10 VHDs:

C:\Program Files (x86)\Windows Kits\10\Emulation
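To uncompress the folders from an elevated command prompt, the built-in compact tool should work; double-check the paths on your machine first:

compact /u /s:"C:\Program Files (x86)\Windows Kits\10\Emulation" /i /q
compact /u /s:"C:\Users\username\AppData\Local\Microsoft\XDE" /i /q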

That's it. After uncompressing both directories, I could debug my app on the Windows Phone 10 emulator.

Grep in PowerShell

When I started using PowerShell, I missed grep from bash. Luckily, PowerShell provides Get-ChildItem and Select-String.

Some helpful parameters in Select-String are -Pattern and -Path.

Both -Pattern and -Path accept a list of strings, with items separated by commas, for example: *.txt, *.log

-Pattern is the pattern of text you want to search for.

-Path is the list of files you want to search.

However, Select-String doesn't search a directory tree on its own; you have to pass file paths to it via -Path. We can use Get-ChildItem to get the list of files, with -Recurse to include all files under a directory recursively.

The basic pattern for combining both cmdlets to grep is as follows:

Get-ChildItem -Recurse -Path C:\Path\To\Folder -Include *.txt | Select-String "SearchText"

I created a script in my GitHub repository (https://github.com/kceiw/PowerScript/blob/master/scripts/GrepShell.ps1) so that I can reuse it.

This script is not signed though. If you want to use it, you need to change your execution policy to allow it to run; see Set-ExecutionPolicy.
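For example, this allows locally written scripts to run unsigned, scoped to the current user only:

# RemoteSigned: local scripts run unsigned; downloaded scripts must be signed.
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned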

Update Owncloud

I recently updated my Owncloud and found that I couldn't access contacts and calendars over WebDAV any more. After trying a few things, it worked again.

At first when I tried to open the link from the browser, I got this error:

User did not have the required privileges ({DAV:}read) for path "principals/username"

I searched online and found an issue filed against Owncloud on GitHub: https://github.com/owncloud/core/issues/14048

The workaround mentioned in the issue was already in my Owncloud, but I still had the same problem. I couldn't go further since there was no other workaround and the issue was closed. I suspected something had gone wrong during the upgrade, because the instance had been in maintenance mode for quite a long time. In any case, I got a message about a newer version when I logged in as admin, so I decided to upgrade manually; it couldn't get any worse.

After I'd done that, the files_encryption app for server-side encryption didn't work; the log showed a call to an undefined function. I removed it from the apps folder since I didn't really need it.

After that, when I logged in as admin, a message on the admin page told me to run the command "occ encryption:migrate". I did so, and surprisingly I could sync both contacts and calendars again.

Dynamically Adding VS Menu Item

We can extend VS using the VS SDK, which provides a way to add menus and menu items. Sometimes we only know the menu items at run time, which means we have to add them dynamically. Luckily, there's a document and example on MSDN about how to do that (http://msdn.microsoft.com/en-us/library/bb166492.aspx). The approach seems straightforward. However, when I tried it out, I fell into what I consider a tricky trap, and it took me a few days to figure out. So I want to write it down.

Let me summarize the approach:

1. Create a placeholder menu item in the .vsct file. This menu item must have the CommandFlag DynamicVisibility. Other than that, it is similar to other menu items in the .vsct file: it is in a group and a menu, and it has its own guid and id.

2. Create a menu command class that inherits from OleMenuCommand and override the method DynamicItemMatch.
The placeholder menu item is always a valid menu item; DynamicItemMatch is then called for each consecutive id until it returns false.

3. Add BeforeQueryStatus and other handlers for your menu items.
You can set the status, text, and other properties of a menu item in its BeforeQueryStatus handler.
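To make step 2 concrete, here's a minimal sketch of such a command class, modeled on the MSDN example. The predicate that decides whether an id belongs to your dynamic range is supplied by you:

using System;
using System.ComponentModel.Design;
using Microsoft.VisualStudio.Shell;

class DynamicItemMenuCommand : OleMenuCommand
{
    private readonly Predicate<int> matches;

    public DynamicItemMenuCommand(CommandID rootId, Predicate<int> matches,
        EventHandler invokeHandler, EventHandler beforeQueryStatusHandler)
        : base(invokeHandler, null /* changeHandler */, beforeQueryStatusHandler, rootId)
    {
        this.matches = matches;
    }

    // Called by the framework for each consecutive id after the placeholder,
    // until it returns false.
    public override bool DynamicItemMatch(int cmdId)
    {
        if (matches(cmdId))
        {
            MatchedCommandId = cmdId;
            return true;
        }

        MatchedCommandId = 0;
        return false;
    }
}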

Here are my lessons.

1. The placeholder menu item is always a valid .vsct menu item and shows in the menu, because it's declared in the .vsct file. DynamicItemMatch doesn't really determine its validity; if you return false in this method, the framework just stops querying for the next id.

2. Be careful with the id of the placeholder menu item and the number of menu items you want to create. You don't want the ids of dynamically created menu items to overlap any id you use for other menu items. This can get out of control, especially when you dynamically add menu items in several places and all of them live in the same menu and group.

3. DynamicItemMatch is just a way to let the framework know when to stop querying for dynamically generated menu items. You still need to set the right status and text for each generated menu item in the BeforeQueryStatus handler.

4. We cannot rely on MatchedCommandId to identify a dynamically created menu item.

In the example in the MSDN document, MatchedCommandId is set to the command id when there's a match. In BeforeQueryStatus it's used to identify the menu item and is reset afterwards, but it's not used in the invoke handler. I didn't understand that at first: I didn't reset MatchedCommandId in the BeforeQueryStatus handler, and I used MatchedCommandId in the invoke handler. When I clicked the first menu item, the invoke actually ran against the last generated menu item. After some debugging, I realized that all the dynamically generated menu items and the placeholder menu item are the same instance. That is, the framework creates one OleMenuCommand instance for the placeholder menu item and calls DynamicItemMatch, the BeforeQueryStatus handler, and the other handlers on that same instance. Once you set a property on that instance, it stays there; for example, MatchedCommandId keeps the last assigned value. You don't need to worry about dynamically generated menu items showing weird text, though: the framework always calls the BeforeQueryStatus handler before displaying a menu item, so you always get a chance to set the right text and status for that particular item.

The approach for Visual Studio 2012 (https://msdn.microsoft.com/en-us/library/Bb166492%28v=vs.110%29.aspx) creates an instance for the placeholder menu item, as well as one for each generated menu item. However, I believe that's not the right approach:

1. The framework still calls DynamicItemMatch on all the created instances to determine whether it should query for the next id.

2. The documentation of DynamicItemMatch indicates that it's meant for dynamically added menu items, yet the Visual Studio 2012 approach doesn't use it at all.

The above is what I learned. Since I have a data structure representing the menu items, I don't need MatchedCommandId, and I'm still not quite sure about its intended usage. But at least the Visual Studio 2015 approach works, and you can debug from there if you see any problems.

Use Grunt to Develop a Web Application

I started web development recently and use Grunt to run tasks automatically. Once the configuration file Gruntfile.js is set up, everything runs smoothly. It's straightforward to get started, and there are plenty of plugins that make life much easier. I'm using jshint, htmlhint, less, and watch; they provide Grunt tasks you can use out of the box. Besides these plugins, you can also use any npm module the way you would in a regular JavaScript file. I don't plan to repeat the setup of Grunt and the tasks listed above. Instead I'm going to write down some changes I made to meet my requirements, as well as a few things worth noting.

1. Only compile changed files.

This is done by the watch task, and there's an example at https://github.com/gruntjs/grunt-contrib-watch:

var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
    grunt.config('jshint.all.src', Object.keys(changedFiles));
    changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
    changedFiles[filepath] = action;
    onChange();
});

It only picks up changes in JavaScript files. Below are the problems I needed to solve:

1. Watch changes on other files.

2. Handle the case when a file is deleted.

Let's first look at the parameters passed to the watch event handler: action, filepath, and target. The target parameter doesn't appear in the example above; it's the target in the watch task that executes when the file filepath changes. The action is one of "added", "changed", or "deleted". Below is the solution I used for the problems above.

1. Record the target.
Use a dictionary from a target to a list of changed files, instead of the other way around as in the example. That is:

var changedFilesForTargets = Object.create(null);

grunt.event.on('watch', function(action, filepath, target) {
    var filesForTarget = changedFilesForTargets[target];

    if (filesForTarget === undefined || filesForTarget === null) {
        filesForTarget = [];
        changedFilesForTargets[target] = filesForTarget;
    }

    filesForTarget.push(filepath);
    onChange(); // the debounced handler from the example above
});

Then in onChange, I iterate through the keys of changedFilesForTargets and update the corresponding task/target's file paths:

var targets = Object.keys(changedFilesForTargets);
for (var i = 0; i < targets.length; ++i) {
    switch (targets[i]) {
        case 'target1':
            // similar to the example: set grunt.config with the changed files.
            break;
        // more targets.
    }
}

Note that the target name here is the target you set in the watch task. That covers adding and changing files.

2. Handle the "deleted" action.

I have all source files (js, less, html) in one folder (say src/) and need to compile and copy the result files to another folder (say build/). When I delete a file in src/, I also want to delete it from build/. When the action is "deleted", I add the file path to a separate list, and in onChange() I compute the corresponding path in build/ and delete the file there with fs.unlinkSync(); fs is a built-in node.js module.
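Here's my own sketch of that delete handling, assuming the src/ and build/ layout above; onChange is the debounced handler from earlier:

var fs = require('fs');      // built-in node.js modules
var path = require('path');

var deletedFiles = [];

grunt.event.on('watch', function (action, filepath) {
    if (action === 'deleted') {
        deletedFiles.push(filepath);
        onChange();
    }
});

// Inside onChange: map each deleted file in src/ to its build/ counterpart.
function removeDeletedFiles() {
    deletedFiles.forEach(function (srcPath) {
        var buildPath = path.join('build', path.relative('src', srcPath));
        if (fs.existsSync(buildPath)) {
            fs.unlinkSync(buildPath);
        }
    });
    deletedFiles = [];
}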

2. File path separator

I want this to work cross-platform, and I want to copy only the changed files from the source folder to the destination folder. I also need to build paths when copying files in other scenarios. I found cases where separators were mixed on Windows: '/' and '\' appeared in the same path. I ended up using the built-in node.js module path to build paths.

3. Work on Cygwin

This may be the trickiest one. I used to run grunt watch in Cygwin, but when I deleted a file, the watch task didn't trigger. I tried to debug it a couple of times and couldn't figure it out. When I ran node.js from the Windows console instead, it picked up the deletions. So I just use the Windows command line now.

4. Log and debug

I log verbose messages with grunt.verbose.writeln() and run commands with the "--verbose" option to see more information.

I'm glad I found grunt-debug-task. After installing it, I can run "grunt debug task:target" to start debugging. It requires node-inspector to run. It starts the Chrome browser, and you can debug your Gruntfile.js like a client-side JavaScript file.

The above is what I did and learned during this project. It's always good to learn by practicing. Grunt works as expected in my project. Next up is making it work with Cordova and the require.js optimizer.