Concept – Technology – Application – The 3 Pillars of Technical Study

Over the course of our experience teaching thousands of engineering students and professionals, and through our interactions with various stakeholders, we have heard recurring concerns from students, faculty members and businesses alike. There are some key patterns in the concerns raised, and one of them is about what engineering students should learn in their curriculum. In this post we make a humble attempt to answer this question with a seemingly simple but effective approach that students can take to set themselves up for success.

Table of Contents

  1. Problems faced by various stakeholders
  2. Our responsibility as Engineers
  3. The Gap in the Technical Skillset
  4. Conceptual Depth
  5. Technologies – the missing piece in the puzzle
  6. Being successful as an Engineer
  7. The Golden Mantra
  8. A Technology Ninja?
  9. Teaching to build and learn

Continue reading “Concept – Technology – Application – The 3 Pillars of Technical Study”

TAME – Key Features – Schema and Entities

In the previous post we discussed the features we would like to have in a modern-day MVC framework. In this post, let us discuss some of the key problems in designing such a framework and how we have approached them in TAME.

Data is stored on storage devices as bits. However, humans don't quite work at this level, so we have come up with mechanisms to store and retrieve data from a storage device using a high-level interface. This high-level interface is the notion of files, and on top of files we have built the notion of databases.

A database makes this storage and retrieval more user-friendly than working with raw files. Databases are stores of “entities”. Entities may be related to other entities. Interaction with the database involves performing CRUD operations on these entities.

Entities contain fields. Fields have data types. Data types can be broadly categorized as “primitive data types” and “composite data types”. When considering relational databases, we normalize data into primitive data types; composite data types are handled using primary and foreign key relationships across multiple tables. However, in a NoSQL store, we may store composite data as a sub-record within a single record itself and avoid an additional join.
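
To make this concrete, here is a small sketch in Python of the same composite stored both ways. The user/address data is purely hypothetical, not TAME's actual schema:

# Relational form: the composite ("address") is normalized into its own
# table and linked back to the user via a foreign key.
users_table = [
    {"id": 1, "name": "Asha", "active": True},
]
addresses_table = [
    {"id": 10, "user_id": 1, "city": "Bangalore", "pincode": "560001"},
]

# Reading the full record requires a join across the two tables.
def user_with_address(user_id):
    user = next(u for u in users_table if u["id"] == user_id)
    address = next(a for a in addresses_table if a["user_id"] == user_id)
    return {**user, "address": address}

# NoSQL form: the composite is embedded as a sub-record, so the same
# read is a single lookup with no join.
users_collection = [
    {"id": 1, "name": "Asha", "active": True,
     "address": {"city": "Bangalore", "pincode": "560001"}},
]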

So we start our discussion by asking: what is the minimum set of primitive and composite data types we need? If we do a comparative analysis of different programming languages and database technologies, we soon realize that almost all languages have some way of representing strings, numbers and booleans. There are many other primitive types, but these are the minimum. The reasoning behind why this is so is beyond the scope of this post.

Composites are ways of grouping data together. Composites are made up of primitives and other composites. At a minimum, we need one composite data type that has sequential properties (list-like) and at least one that can store non-sequential, key/value data (map-like).
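
As a sketch of this minimal set, Python's built-in types happen to map onto it one-to-one. The course record below is illustrative only:

# Primitives: str (string), int/float (number), bool (boolean).
# Composites: list (sequential) and dict (key/value, map-like).
course = {                              # dict: the map-like composite
    "title": "Intro to Containers",     # string
    "credits": 4,                       # number
    "published": False,                 # boolean
    "tags": ["devops", "docker"],       # list: the sequential composite
    "author": {                         # composites can nest composites
        "name": "Gautham",
        "verified": True,
    },
}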

Continue reading “TAME – Key Features – Schema and Entities”

Rethinking MVC Frameworks

In the previous post I explained the need for an application framework that allows functionality to evolve along with our startup's processes. In this post, we will go into the technical details of what we expect from such a framework.

Almost all server-side MVC frameworks provide a standard set of functionalities (a minimal sketch of the request flow follows the list):

  • A way to receive an HTTP request from the client, examine the request and determine what the user wants to do. A URL mapper transfers control to a controller, which then does the request processing.
  • The controller fetches data from backend services (e.g., databases).
  • Along the way, data is transformed from relational form to object form with the help of an Object-Relational Mapper (the object form is what we call the model).
  • The model data is then transformed into a view with the help of templates.
  • Several other functionalities are provided by mature MVC frameworks – sessions, caching, REST endpoints, etc.
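
Here is a toy sketch of that request flow in Python. It is framework-agnostic: the names (routes, user_controller, render) are illustrative and not any particular framework's API.

# View: turn model data into a representation. Real frameworks use a
# template engine; str.format stands in for one here.
def render(template, context):
    return template.format(**context)

# Controller: fetch data from a backend service. An ORM would map a
# relational row to an object (the model); a dict stands in here.
def user_controller(request):
    model = {"name": "Asha", "visits": 42}
    return render("<h1>Hello {name}, visit #{visits}</h1>", model)

# URL mapper: decides which controller handles which path.
routes = {"/user": user_controller}

def handle(request):
    controller = routes[request["path"]]
    return controller(request)

print(handle({"path": "/user"}))  # <h1>Hello Asha, visit #42</h1>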

In the last decade there have been some trends that are changing the way we work with MVC frameworks. For one, clients are becoming more intelligent and thicker (logic is moving from server to client where possible). This has resulted in the emergence of client-side MVC frameworks.

Continue reading “Rethinking MVC Frameworks”

Scaling processes for a growing startup

Jnaapti completed 6 years this May. What started as a one-person company in 2011 is now a growing business with more than a dozen people working on different aspects of the business. We have conducted training in over 35 organizations in over 30 technologies in the last 6 years.

Like many other tech startups, I started off writing the entire code for our main product, the Virtual Coach, single-handedly. The first version of the product took me 2.5 months to build, from concept to deployment.

We are now a team of 8-10 engineers working on different aspects of engineering. We now have multiple products in the education space and are targeting multiple customer segments.

When it was just one or two people, we didn't need a formal project management process. That doesn't mean a process didn't exist; rather, the process lived in the heads of the individuals who were part of the development team.

If two of us are programming, we can easily share tasks, agree on deadlines and decide our commitments without requiring any elaborate processes, as long as we trust each other's commitments. We may resort to using simple tools like spreadsheets, or even pen and paper or a whiteboard.

However, as the organization grows, the number of cross-communication channels in the team increases. If every person communicates with every other, there is chaos, and this soon starts reflecting in the overall efficiency of the organization. This is not a new problem, and it has been elaborately discussed in some excellent literature.
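
The arithmetic behind this is worth making explicit: with n people there are n * (n - 1) / 2 possible pairwise channels, so channels grow quadratically while headcount grows only linearly. A quick illustration:

# Pairwise communication channels in a team of n people
for n in (2, 5, 10, 20):
    print(n, "people:", n * (n - 1) // 2, "channels")
# 2 people: 1 channel; 10 people: 45; 20 people: 190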

Continue reading “Scaling processes for a growing startup”

The Jnaapti Journey – Virtual Coach

What it is

While the idea of the Virtual Coach had been in Gautham's mind since the inception of jnaapti, a startup does not have the luxury of building out the entire vision before launching. Rather, we have to start by building the minimum set of features we can validate with our users, then iterate frequently and let usage and metrics drive the product's evolution. Jnaapti has been doing this ever since the launch of the first version of the product.

The vision of the Virtual Coach is to eventually replace a human coach with a “virtual” coach, but in such a way that the learner does not even realize that he/she is being coached by a virtual entity. It is akin to the Turing test for Artificial Intelligence, applied to learning and coaching. While we have started with software technology training, we intend to build a generic platform that has the ability to teach anything (any skill) to anyone (no bias), starting from anywhere (irrespective of what they know today).

The problem that keeps us awake at night is, “How do we scale good quality coaching so that it reaches the maximum number of people?”

Technical details

Version 0

The MVP of Jnaapti’s Virtual Coach was not a “product” at all. 🙂

The initial idea of the training process was tested out using email as the form of communication between the learners and the coach. As Gautham coached people, he carefully noted down the pains of using email as the medium of communication. Learners, especially students, didn't follow email etiquette: while Gautham was busy downloading the attachment from one solution email and composing his review, students would send new emails asking him to discard the old solution and consider a new one. The requirement for the first version of the product was to replace email as the mode of communication, and to have the interaction around an activity grouped under a single context.

Version 1 – 2011

The first version of the product was built single-handedly by Gautham using Web2Py. It was a server-heavy product: minimal JavaScript was used, and a lot of the backend data entry was done using the ready-made views that Web2Py provides out of the box. The entire conceptualization, design, development and deployment of the product took 2.5 months.

The first version of the product was called the “Jnaapti Virtual Learning Environment”, later renamed to “Virtual Coach”.

Continue reading “The Jnaapti Journey – Virtual Coach”

A Docker Based Development Environment

At jnaapti, we have been using containerization technologies almost since our very early days. In 2011, when I was evaluating a solution to provide lightweight containers to our learners in the Virtual Coach, I was told that the only mature option was virtualization. But virtualization was too slow (in terms of boot-up time), and I didn't have enough resources to keep stand-by nodes running all the time. So after some evaluation, I decided to use LXC, and it served its purpose. However, several features were missing, and I was on the verge of building a few of them myself.

So it's not surprising that when I discovered Docker, I just fell in love with it. One of the first things we did was move our LXC-based learner containers to Docker. We then slowly started migrating portions of our infrastructure to Docker. We achieved full Docker migration last November, and then also moved our staging/testing systems in the cloud to Docker.

The last item on the list was to migrate our development environments. The initial migration wasn't too hard, because of two things:

  1. We already had Docker in production, so it was a matter of working off our production Dockerfiles.
  2. We were already using KVM in development, so we already had a clear idea of what containers our system should be running.

Why the move away from KVM-based development environments? Simple: KVM's disk usage was too high for our requirements. Our development images were about 5 GB each; with tens of such images, we soon ran out of disk space. I am not sure how many Docker fans will approve of some of the things I discuss here, but I believe this is a much leaner solution than using virtualization, and I don't see any issues with the way I am doing it.

So with the migration from KVM to Docker, there were a few additional things that I wanted to handle:

  1. Can we use desktop tools like text editors with data in Docker? Imagine I am writing a NodeJS application: I want to use Atom installed on my host to write the code, but run NodeJS inside the container.
  2. It is very easy to work with command-line utilities (which don't need X) in containers, but what about desktop applications like Eclipse? Can we run these in a Docker container and still have the same user experience as a regular app? What are some best practices for doing so?
  3. Is it possible to expose devices to Docker containers? This is required, for example, if we are doing Android development and want to debug our app on an actual Android device.

The first two were rather easy and I sailed through them. I struggled a little with the third, but I finally made some headway.

This post attempts to capture some of what I learned in this process, in case you want to build a similar environment. So let me answer these questions:

Using Desktop tools with Docker containers

This one is easy. A practice we follow is to keep all the code we write inside a mounted volume. Containers are used to run processes in a contained fashion, but those processes manipulate files that live on our host (and not in the underlying diff filesystem). We can remove our containers at any time and we don't lose anything.

Here is a sample run to demonstrate this:

Start a Docker container that has NodeJS installed in it. Make sure that the container has access to a directory on the host (in this case /home/gautham/Desktop/node-data):

gautham@ananya:~|? docker run -d -P -v /home/gautham/Desktop/node-data:/data --name "node-example" ananya-nodejs:0.0.1
820e105db5061b380e6117e42a0cabad5f00c54e54f5016aefc18399e2a2eb25

Check if the container is running:

gautham@ananya:~|? docker ps
CONTAINER ID   IMAGE                 COMMAND               CREATED         STATUS         PORTS                   NAMES
820e105db506   ananya-nodejs:0.0.1   "/usr/sbin/sshd -D"   9 seconds ago   Up 8 seconds   0.0.0.0:49155->22/tcp   node-example

Using Atom to edit files in a volume shared with a Docker container

Inside the container (which is accessible via SSH):
gautham@ananya:~|? ssh -p 49155 ubuntu@localhost
ubuntu@localhost's password:
Last login: Tue Apr 14 14:13:12 2015 from 172.17.42.1
ubuntu@820e105db506:~$ cd /data/
ubuntu@820e105db506:/data$ ls
hello.js
ubuntu@820e105db506:/data$ node hello.js
Hello World!

Using Desktop tools inside Docker containers

This one initially seemed a little difficult, but I figured it out soon enough.

Create an image that has the lxde-core package installed in it (for example, an Ubuntu-based Dockerfile that installs lxde-core along with the SSH server used to connect to the containers here).

Now, there are 2 options:

Connect to LXDE running inside the Docker container

Run these commands on the host:

docker run -d -P ananya-desktop:0.0.1
sudo su
xinit -- :1 &

This will switch you to a different virtual terminal (accessible at Ctrl+Alt+F8), where you will see a plain white terminal. In this terminal, find the container's SSH port and connect to it with X forwarding:

docker ps
CONTAINER ID   IMAGE                  COMMAND               CREATED         STATUS         PORTS                                            NAMES
38280dd66b98   ananya-android:0.0.1   "/usr/sbin/sshd -D"   4 minutes ago   Up 4 minutes   0.0.0.0:49156->22/tcp, 0.0.0.0:49157->5901/tcp   hopeful_lumiere
ssh -X -p 49156 ubuntu@localhost

Then, inside the Docker container, run:

startlxde

You should now see a full-fledged desktop running, like this!

LXDE running in a Docker container

So your host is running at Ctrl+Alt+F7 and your Docker container is at Ctrl+Alt+F8. Use this option if you are running many desktop applications and want to be totally isolated from the host while working with the applications in the container (i.e., you are not using any host applications in conjunction with the applications in the Docker container).

Only run the application that you are interested in

I found this option to be better in some ways. I have my Android Studio set up using this option now.

gautham@ananya:~|? docker run -d -P ananya-desktop:0.0.1
gautham@ananya:~|? ssh -X -p 49156 ubuntu@localhost
ubuntu@localhost's password:
Welcome to Ubuntu 14.04.2 LTS (GNU/Linux 3.16.0-34-generic x86_64)

* Documentation: https://help.ubuntu.com/
Last login: Tue Apr 14 14:25:47 2015 from 172.17.42.1
ubuntu@38280dd66b98:~$ cd android-studio/
ubuntu@38280dd66b98:~/android-studio$ ls
ubuntu@38280dd66b98:~/android-studio$ bin/studio.sh

And lo and behold!

Android Studio running in a Docker container

Accessing devices within a Docker container

A final requirement was to see whether we can get Docker to detect USB devices. I found that if you pass the --privileged flag and mount the relevant /dev path appropriately, you can then access the device in the Docker container. I was able to successfully use adb along with my Docker container.

docker run --privileged -v /data:/data -v /dev/bus/usb:/dev/bus/usb -d -P ananya-android:0.0.1

Docker has been a boon, and although there are several areas for improvement, I see that it has a future. It has become an indispensable tool in our software arsenal at jnaapti.

Cross-posted here: http://buzypi.in/2015/04/14/a-docker-based-development-environment/