What Connects Web Application, Dumb Charades, Hindi Movies & Wikipedia


Many years back, I came to Pune and started staying in a housing complex called Nisarg City. As my son started going to play school, we began meeting other parents during pick-up and drop-off duty. Slowly, the parents also became fast friends, especially those living in the same or nearby housing complexes.

At some point, four of us used to walk together, rain or shine.

Can you guess the topic of discussion during most morning walks? 

No, it was never politics. Mostly it was about solving a technical or programming problem. Otherwise, we would plan a family trip together.

In the evenings, especially on weekends, the families used to sit together till late, chit-chatting and enjoying themselves. One of the families was really into playing Dumb Charades and slowly drew all of us into it.

It has been about 6 years since I moved to Bangalore. WhatsApp came along and we got connected again, but unfortunately we never got a chance to sit together.

But I still remember and cherish those days. Nothing matches roaming, eating and playing together. During those days, we played Dumb Charades so many times that finding a movie name that had not already been played became a challenge. Sometimes we would go to Wikipedia in search of movie names.

Earlier this year, just before the Covid era, we settled into a new housing complex in Bangalore (after a five-year-long stay in an individual house), and then lockdown happened.

Diwali of 2020 brought both Dumb Charades and family get-togethers back into our lives.

In memory of the old days, and in anticipation of good times ahead, I thought of creating a simple application that picks a random movie name (along with other available context) from Wikipedia to help play Dumb Charades.

The first version was a quick-and-dirty Python script. However, the Product Manager in me insisted on a much better user experience. The result was a web application that can also be used from anywhere on a mobile phone.
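For the curious, the core idea can be sketched in a few lines of Python. This is not the actual app code, just a minimal sketch using the public MediaWiki API; the category name used later is an assumption:

```python
import json
import random
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def fetch_category_members(category, limit=200):
    """Fetch page titles from a Wikipedia category via the MediaWiki API."""
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": limit,
        "format": "json",
    })
    with urllib.request.urlopen(f"{API}?{params}") as resp:
        data = json.load(resp)
    return [m["title"] for m in data["query"]["categorymembers"]]

def pick_movie(titles):
    """Pick one title at random; drop the ' (film)' suffix Wikipedia often adds."""
    title = random.choice(titles)
    return title.removesuffix(" (film)")
```

For example, `pick_movie(fetch_category_members("Category:Hindi-language films"))` would return one random title from that (assumed) category.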

Enjoy the app at http://mathurakshay.github.io/dumbc.html


PS: If you are interested in the technical aspect of the app, please look at https://github.com/mathurakshay/mathurakshay.github.io.

Kubernetes as Orchestrator for A10 Lightning Controller

A10 Lightning Controller is an application composed of multiple micro-services; it works as the management, control and analytics plane of A10 Lightning Application Delivery Service.

By default, the controller is available to A10 customers as SaaS, so they need to deploy only the Lightning ADCs in their network.

But some customers want to deploy the controller in their own network as well, for compliance or other (mostly non-technical) reasons.

The Bangalore Kubernetes Meetup group planned a session on the topic "Kubernetes in production" and gave us (Manu and myself) a chance to present. Our use case: not only do all the components of the A10 Lightning Controller run in Kubernetes (aka K8s), we also package the controller so that our customers' production deployments run in K8s. So this includes packaging and distributing the application, as well as running it in scenarios where the administrator of the application may not be very K8s savvy.

Following is what we covered in our talk:

  1. Why we moved to K8s, the design choices we made while porting, and what could be done better if we designed it from scratch
  2. Issues we faced, our solutions and ongoing research in the following areas:
    • Scaling each micro service individually
    • Persistence across reboots
    • Persistent data storage
    • Overlay networking
    • Deploying clustered applications
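As a rough illustration of the first point on that list, scaling each micro-service individually, here is a minimal Python sketch in which every micro-service gets its own Kubernetes Deployment with its own replica count. The service names, images and replica counts are hypothetical, not A10's actual packaging:

```python
def deployment_manifest(name, image, replicas):
    """Build a Kubernetes Deployment manifest (as a dict, ready for
    yaml.dump or the Kubernetes API) with a per-service replica count."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Each micro-service is a separate Deployment, so each scales on its own.
services = {
    "api-gateway": ("example/api-gateway:1.0", 3),  # hypothetical images
    "analytics":   ("example/analytics:1.0", 5),
    "config-db":   ("example/config-db:1.0", 1),
}
manifests = [deployment_manifest(n, img, r) for n, (img, r) in services.items()]
```

Because the replica count lives in each service's own Deployment spec, `kubectl scale` (or a HorizontalPodAutoscaler) can grow one service without touching the others.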

Here is the presentation:

Kubernetes as Orchestrator for A10 Lightning Controller from Akshay Mathur

Here is the video captured by Neependra at the meetup:
Videos of other presentations are also available on Neependra's website.
Links to the other presentations are in the comments of the meetup page.

In the end, I thank Manu for co-presenting.

Cloud Bursting using A10 Lightning ADS and AWS Lambda

Cloud Bursting

Cloud bursting is an application deployment model in which an application that normally runs in a private cloud or data center "bursts" into a public cloud when it needs additional resources (i.e., computing power), using cloud computing to meet the extra demand.

Think of a scenario in which an e-commerce application is running in a data center, and suddenly a few items become popular and a lot of users start checking them out. Traffic starts building on the website and responses become slower because of the load on the servers. The only solution now is to scale the server infrastructure by provisioning more servers to handle the traffic. But provisioning a new server on the fly is not an option in a data center. Public clouds come to the rescue: additional servers can be launched in a public cloud like AWS and the extra traffic routed to them.

So cloud bursting relates to hybrid clouds. The advantage of such a hybrid cloud deployment is that the resources become available on the fly, and an organization pays for extra compute resources only when they are needed.

Cloud Bursting Architectural Model

The cloud bursting architecture establishes a form of dynamic scaling that scales or “bursts out” on-premise IT resources into a cloud whenever predefined capacity thresholds have been reached. The corresponding cloud-based IT resources are redundantly pre-deployed but remain inactive until cloud bursting occurs. After they are no longer required, the cloud-based IT resources are released and the architecture “bursts in” back to the on-premise environment.

The foundation of this architectural model is the automated scaling listener and resource replication mechanisms. The automated scaling listener first determines when to trigger resource replication to deploy servers in the cloud. Then, when the additional servers are ready in the cloud, it redirects requests to them alongside the on-premises servers.

Solution Components

In this solution, we shall see how we can burst into the AWS environment at minimal cost. A10 Lightning ADS will act as the scaling listener and AWS Lambda functions as the replication mechanism. AWS API Gateway will be required to invoke the Lambda functions from the ADS.

In the steady state, A10 Lightning ADS will front-end the application traffic and monitor server latency. An application server instance is expected to exist in the AWS account in the stopped state, so that the Lambda function can start it as needed.
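The Lambda side of this design could be sketched roughly as below. This is an illustrative sketch, not the actual solution code; the instance ID and latency threshold are assumed values, and `boto3` is imported lazily so the burst decision logic stays testable on its own:

```python
LATENCY_THRESHOLD_MS = 500          # assumed burst-out threshold
BURST_INSTANCE_ID = "i-0123456789"  # hypothetical pre-created, stopped instance

def should_burst(latency_ms, threshold_ms=LATENCY_THRESHOLD_MS):
    """Decide whether the observed server latency justifies bursting out."""
    return latency_ms > threshold_ms

def lambda_handler(event, context):
    """Invoked by A10 Lightning ADS via API Gateway when latency is high."""
    latency_ms = float(event.get("latency_ms", 0))
    if not should_burst(latency_ms):
        return {"action": "none"}
    import boto3                      # imported lazily; available in Lambda
    ec2 = boto3.client("ec2")
    # Start the pre-created, stopped instance so it can take extra traffic.
    ec2.start_instances(InstanceIds=[BURST_INSTANCE_ID])
    return {"action": "burst-out", "instance": BURST_INSTANCE_ID}
```

A symmetric "burst-in" function would call `ec2.stop_instances` once latency stays below the threshold for some time, so the extra instance stops accruing cost.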

The following presentation was given at the AWS Bangalore meetup:


Startups and Partnerships

Every startup I have worked with was keen on strategic/channel partnerships in its early days. These partnerships happen where the products or services are complementary and have a great possibility of going together. For the startup, the idea behind partnering with an established organization is access to its customer base and sales force. For the large organization, it is primarily about the breadth of products/services it can offer to its customers, with some customer acquisition as a bonus.

Image Courtesy: SPRING Singapore

This looks like a win-win for everyone, and that's why so many strategic/channel partnership deals get signed.

However, I have never seen any customer acquisition actually happen for the startup. Initially, when my primary role was on the tech side, I used to ignore these partnerships. And I am not the only one saying this; Paul Graham also says in his article:
"Partnerships too usually don't work. They don't work for startups in general, but they especially don't work as a way to get growth started. It's a common mistake among inexperienced founders to believe that a partnership with a big company will be their big break. Six months later they're all saying the same thing: that was way more work than we expected, and we ended up getting practically nothing out of it."

After moving to the business side, I started talking to partners myself and at the same time started investigating why things don't work and how to make them work.

When following up with the partner leadership, I heard the same answer from everyone: "We shall let you know when any of our customers asks for your kind of product."

Later, a friendly salesperson told me his point of view:
"Bringing money out of anyone's pocket is a tedious job. So we (the salespeople) sell only the items we are most comfortable with and that give us the most benefit in meeting our numbers. A new product doesn't fit this criterion, especially when it comes from a partner startup, because customers also don't feel very comfortable with a new product, and the original deal may get derailed."

So! Partnerships don't work.
But approaching every small customer with a full sales effort is a tedious task too.
Look around and you will see so many partnerships working. What did they do to make them work? Is it just the brand name? Or is there some solution to this chicken-and-egg situation?

Paul Graham also says in his article that scaling is a good-to-have problem. But until you reach there, founders have to sell themselves. They have to do it manually, in a way that doesn't scale, and they have to delight the customers. That's the only way to acquire the initial customers.
To add to that, I have seen that even professional salespeople fail to sell a new product in the early days. Founders and other entrepreneurs on the team have to do it.

The trick for making partnerships work came from another friend, a salesperson himself who has developed and managed partners for his company.
According to him, you need to sell and give the first customer to your partner, rather than expecting them to bring the first customer to you. When the partner sees new money (and a customer relationship) coming in without making a sales effort, they start taking interest, as they now need to service the customer, keep him happy and get more orders from him.
Image Courtesy: RoI Investing

In this process, their teams get more familiar with your product and establish a process for repeatable business. You may need to put in a lot of effort the first time (and maybe a couple more times), but once the channel is set up, it really works well.

According to him, it takes about two years to fully nurture a channel partner.
Though the trick looks fully logical to me and my friend has had success with it, I am yet to try it myself. I will post my experience in a later update.

Understanding AWS Shared Responsibility Model for Security

I have heard many people say that they need not worry about the security of their application (or that it is automatically PCI compliant) just because the application is hosted on AWS EC2.

This is a big misconception. AWS has good literature about security, and it clearly describes a shared security responsibility model.

The presentation shared here was given at the AWS meetup in Bangalore organized by Jeevan and Habeeb. As its purpose was just to make people aware of their responsibility, the details of individual topics are not covered. It also does not talk about AWS services in detail.

During the presentation, people wanted to understand the details of some AWS services and to dive deeper into each aspect. Many of the questions about AWS were discussed in detail by Shailesh, an architect at AWS who was present at the meetup. For the longer topics, we decided to organize follow-up meetups focused on each one.
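As a small taste of the customer's side of the model: configuring security groups is the customer's responsibility, not AWS's. Here is a hypothetical sketch that flags ingress rules open to the whole internet; it is illustrative only, and `boto3` is imported lazily so the rule check stands on its own:

```python
def is_world_open(rule):
    """True if an ingress rule permits traffic from anywhere (0.0.0.0/0)."""
    return any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))

def audit_security_groups():
    """Return (group id, port) pairs open to the whole internet.
    Requires AWS credentials; boto3 is imported lazily."""
    import boto3
    ec2 = boto3.client("ec2")
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            if is_world_open(rule):
                findings.append((sg["GroupId"], rule.get("FromPort")))
    return findings
```

Whether such an open rule is a problem depends on the port and application, but noticing it at all is squarely on the customer's side of the shared responsibility line.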

Challenges with Application Visibility


After I create a web application and make it generally available for use, the first and foremost challenge in front of me is figuring out the number of servers to deploy to sustain the incoming traffic. To determine the number of servers, I need to know the traffic landing on my servers. Aggregate numbers do not really help here, because traffic does not arrive at a constant rate; understanding the traffic pattern is what matters. To start with, I need to provision servers for the maximum load, unless I have a solution that automatically scales my servers based on traffic.

Once I make sure that all the traffic is being handled properly, I start worrying about the user experience. The very first impression is built on how long users wait between a click and the page load. I broadly divide this into two parts: the response time from the server and the time the browser takes to render.

When everything looks good at a gross level, it is time to drill down deeper and figure out which areas (URLs) of the application are used more than others. Typically, these are the critical parts of the application workflow and require special attention for availability as well as response time. When I see traffic to some URL going very high, I consider dedicating servers to that traffic and try optimizing that code path.

When the high-priority infrastructure items are taken care of, I would like to know more about the users: how many are visiting, whether they are new or returning, after how much time they return, how many pages they look at (or better, which pages they visit in what order), what browsers or devices they use, where they come from, how they are distributed geographically, and so on.

Next, I want to figure out how much of the traffic comes from non-useful bots, so that I can block that traffic somehow and focus on the real traffic. I would also like to detect (as well as prevent) traffic sent with malicious intent by an attacker. The malicious intent may be a DDoS-style attack on the availability of my application, an attempt to steal data, or just an attempt to deface the user interface.

When I come up with a new version of my application, I would like to compare its performance with the version currently running. Only if I find the performance acceptable do I want to expose the new version to all my customers.

High-level visibility into these areas is not enough. When something goes wrong, the tool providing application visibility should also help me debug the issue and allow me to drill down to the level of individual requests.
Google Analytics is generally known to cover most aspects of application visibility. However, it does not satisfy the need, for the following reasons:

  • Google Analytics only captures traffic in which an HTML page is accessed. All my Ajax calls and requests to other resources are not included
  • Google Analytics is triggered only when the JavaScript on the page is executed. Pages accessed by bots are not included

So what I see in Google Analytics is a subset of the traffic hitting my servers. When reports say that bot traffic amounts to more than 30%, I can't plan accurately based on Google Analytics numbers. Also, Google Analytics, being a client-side JavaScript tool, has no understanding of my deployment infrastructure; it neither helps me drill down into issues nor helps me scale.

One piece that does sit in the traffic path is the load balancer. But load balancers like AWS ELB, NGINX or HAProxy provide very limited visibility into traffic and are not really helpful from an insights point of view.
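Server-side access logs, by contrast, do see the Ajax calls and bot requests that Google Analytics misses. As an illustration (not a product recommendation), a crude Python sketch over an assumed "combined" log format can estimate per-URL traffic and the bot share; the regex and bot heuristic are simplifications:

```python
import re
from collections import Counter

# Assumed "combined" access-log layout; field positions vary by server config.
LOG_RE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) \S+" \d+ \d+ "[^"]*" "(?P<agent>[^"]*)"'
)

BOT_HINTS = ("bot", "crawler", "spider")  # crude heuristic, not exhaustive

def summarize(lines):
    """Count requests per path and estimate the bot share from the user agent."""
    paths, bots, total = Counter(), 0, 0
    for line in lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        total += 1
        paths[m.group("path")] += 1
        if any(h in m.group("agent").lower() for h in BOT_HINTS):
            bots += 1
    return {"total": total,
            "top_paths": paths.most_common(5),
            "bot_share": bots / total if total else 0.0}
```

Even a sketch like this surfaces what Google Analytics cannot: every request that reached the servers, including bots and API calls, which is the number that capacity planning actually needs.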
[The post has also been published at Appcito Blog]

Techniques for Scaling Application with Security and Visibility in Cloud

When the traffic your web application serves increases, you need to do a better job than just scaling servers horizontally or vertically.

You not only need to bring a mind shift on the software architecture side and make the application scalable at various layers, but also deal with the challenges of infrastructure optimization.

At the TechNext meetup in Pune, the topic was discussed at length, carefully observing the challenges, requirements and possible solutions at every stage of growth.

The discussion was around the following topics:
  • Architectural mind-shift for scale
  • Infrastructure challenges with traffic growth
    • Load balancing
    • Content switching
    • Content optimization
    • Traffic insights
    • Security against data theft
    • Protection against BOTs and traffic surge
    • DDoS attack prevention
    • Continuous delivery
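To give a flavour of the first infrastructure item, load balancing, here is a minimal Python sketch of two classic strategies. The server names are hypothetical and real load balancers add health checks, weights and connection draining on top of this:

```python
import itertools

class RoundRobin:
    """Cycle through servers in order, regardless of their current load."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Send each request to the server with the fewest active connections."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        """Call when a request completes, freeing one connection slot."""
        self.active[server] -= 1
```

Round robin is ideal when requests are uniform and cheap; least connections copes better when some requests (say, report generation) hold a server far longer than others.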