Kubernetes as Orchestrator for A10 Lightning Controller

A10 Lightning Controller is an application composed of multiple microservices and works as the management, control, and analytics plane of the A10 Lightning Application Delivery Service.

By default, the controller is available to A10 customers as SaaS, and they need to deploy only Lightning ADCs in their network.

But some customers want to deploy the controller in their own network as well, for compliance or other (mostly non-technical) reasons.

The Bangalore Kubernetes Meetup group planned a session on the topic "Kubernetes in production"
and gave us (Manu and myself) a chance to present. Our use case was that not only do all the components of the A10 Lightning Controller run in Kubernetes (aka K8s), but we also package it so that our customers' production deployments run in K8s as well. So this includes packaging and distribution of the application, as well as running it in scenarios where the administrator of the application may not be very K8s-savvy.

Following is what we covered in our talk:

  1. Why we moved to K8s, the design choices we made while porting, and what could be done better if we designed it from scratch
  2. Issues we faced, our solutions, and ongoing research in the following areas:
    • Scaling each micro service individually
    • Persistence across reboots
    • Persistent data storage
    • Overlay networking
    • Deploying clustered applications

Here is the presentation:

Kubernetes as Orchestrator for A10 Lightning Controller from Akshay Mathur

Here is the video captured by Neependra at meetup:
Videos of other presentations are also available at Neependra's website.
Links to other presentations are in the comments on the meetup page.

Finally, I thank Manu for co-presenting.

Cloud Bursting using A10 Lightning ADS and AWS Lambda

Cloud Bursting

Cloud bursting is an application deployment model in which an application that normally runs in a private cloud or data center "bursts" into a public cloud when it needs additional resources (i.e., computing power), using cloud computing to meet the additional resource requirement.

Think of a scenario in which an e-commerce application is running in a data center, and suddenly a few items become popular and a lot of users start checking them out. Traffic starts building on the website, and responses become slower because of the load on the servers. The only solution now is to scale the server infrastructure by provisioning more servers to handle the traffic. But provisioning new servers on the fly is not an option in a data center. Public clouds come as a savior: the additional servers can be launched in a public cloud like AWS, and the additional traffic can be routed to them.

So cloud bursting relates to hybrid clouds. The advantage of such a hybrid cloud deployment is that resources become available on the fly, and an organization pays for extra compute resources only when they are needed.

Cloud Bursting Architectural Model

The cloud bursting architecture establishes a form of dynamic scaling that scales or “bursts out” on-premise IT resources into a cloud whenever predefined capacity thresholds have been reached. The corresponding cloud-based IT resources are redundantly pre-deployed but remain inactive until cloud bursting occurs. After they are no longer required, the cloud-based IT resources are released and the architecture “bursts in” back to the on-premise environment.

The foundation of this architectural model is the automated scaling listener and resource replication mechanisms. The automated scaling listener first determines when to trigger resource replication for deploying the server in the cloud. Next, when the additional resources (i.e., servers) are ready in the cloud, it redirects requests to those servers along with the on-premises servers.
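The burst-out/burst-in decision described above is essentially threshold-based scaling with hysteresis. Here is a minimal sketch of such an automated scaling listener in Python; the thresholds, class name, and action strings are illustrative assumptions, not A10's actual implementation:

```python
class ScalingListener:
    """Decides when to burst out to the cloud and when to burst back in."""

    def __init__(self, burst_out_latency_ms=500, burst_in_latency_ms=200):
        # Two different thresholds (hysteresis) prevent flapping between states.
        self.burst_out_latency_ms = burst_out_latency_ms
        self.burst_in_latency_ms = burst_in_latency_ms
        self.bursted = False

    def on_latency_sample(self, latency_ms):
        """Return the action to take for the latest latency measurement."""
        if not self.bursted and latency_ms > self.burst_out_latency_ms:
            self.bursted = True
            return "burst_out"   # trigger resource replication in the cloud
        if self.bursted and latency_ms < self.burst_in_latency_ms:
            self.bursted = False
            return "burst_in"    # release the cloud resources
        return "steady"
```

In a real deployment the capacity metric could be latency, connection count, or CPU, but the two-threshold structure stays the same.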

Solution Components

In this solution, we shall see how to burst into the AWS environment at minimal cost. A10 Lightning ADS works as the scaling listener, and AWS Lambda functions work as the replication mechanism. AWS API Gateway is required to invoke the Lambda functions from the ADS.

In the steady state, A10 Lightning ADS front-ends the application traffic and monitors server latency. An application server instance is expected to exist in the AWS account in the stopped state so that it can be started by the Lambda function as needed.
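The Lambda side of the replication mechanism only needs to start (and later stop) the pre-provisioned instance. Below is a hedged sketch of such a handler; the event fields are hypothetical, and the EC2 client is passed in explicitly so the logic can be exercised without AWS credentials (in a real Lambda function you would create it with `boto3.client("ec2")`):

```python
def lambda_handler(event, context, ec2):
    """Start or stop the pre-provisioned burst instance.

    `event["action"]` and `event["instance_id"]` are assumed fields;
    in AWS Lambda, `ec2` would be created via boto3.client("ec2").
    """
    instance_id = event["instance_id"]
    if event["action"] == "start":
        # Burst-out: bring the stopped instance up so ADS can route traffic to it.
        ec2.start_instances(InstanceIds=[instance_id])
        return {"status": "starting", "instance": instance_id}
    # Burst-in: release the cloud resource when it is no longer needed.
    ec2.stop_instances(InstanceIds=[instance_id])
    return {"status": "stopping", "instance": instance_id}
```

API Gateway would map the ADS webhook to this function, passing the action in the request body.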

The following presentation was given at the AWS Bangalore meetup:


Startups and Partnerships

Every startup I have worked with was keen on strategic/channel partnerships in its early days. These partnerships happen where the products or services are complementary and have a great possibility of going together. For the startup, the idea behind partnering with an established organization is to gain access to its customer base and sales force. For the large organization, it is primarily the added breadth of products/services it can offer its customers, with some customer acquisition as a bonus.

Image Courtesy: SPRING Singapore

This looks like a win-win for everyone, and that's why so many strategic/channel partnership deals get signed.

However, I have never seen any customer acquisition actually happen for the startup. Initially, when my primary role was technical, I used to ignore these partnerships. And I am not the only one saying this; Paul Graham also says in his article:
"Partnerships too usually don't work. They don't work for startups in general, but they especially don't work as a way to get growth started. It's a common mistake among inexperienced founders to believe that a partnership with a big company will be their big break. Six months later they're all saying the same thing: that was way more work than we expected, and we ended up getting practically nothing out of it."

After moving to the business side, I started talking to partners myself and at the same time started investigating why things don't work and how to make them work.

When following up with the partner leadership, I heard the same answer from everyone: "We shall let you know when any of our customers asks for your kind of product."

Later, a friendly salesperson told me his point of view:
"Bringing money out of anyone's pocket is a tedious job. So we (the salespeople) sell only the items we are most comfortable with and that best help us meet our numbers.
A new product doesn't fit this criterion, especially when it comes from a partner startup, because customers also don't feel very comfortable with a new product, and the original deal may get derailed."

So, partnerships don't work.
But approaching every small customer with full sales effort is a tedious task too.
Look around, and you will see so many partnerships working. What did they do to make them work? Is it just the brand name that works? Or is there a solution to this chicken-and-egg situation?

Paul Graham also says in his article that scaling is a good problem to have. But until you get there, founders have to sell themselves. They have to do it manually, in a way that doesn't scale, and they have to delight the customers. That's the only way to acquire the initial customers.
To add to that, I have seen that even professional salespeople fail to sell a new product in the early days. Founders and other entrepreneurs on the team have to do it.

The trick for making partnerships work came from another friend, who is a salesperson himself and has developed and managed partners for his company.
According to him, you need to sell and give the first customer to your partner, rather than expecting them to bring the first customer to you. When the partner sees new money (and a customer relationship) coming in without making any sales effort, they start taking interest, as they need to service the customer, keep him happy, and get more orders from him.
Image Courtesy: RoI Investing

In this process, their teams become more familiar with your product and establish a process for repeatable business. You may need to put in a lot of effort the first time (and maybe a couple more times), but once the channel is set up, it really works best.

According to him, it takes about two years to fully nurture a channel partner.
Though the trick looks fully logical to me, and my friend has earned success with it, I am yet to try it. I will post my experience in the future.

Understanding AWS Shared Responsibility Model for Security

I have heard many people say that they need not worry about the security of their application (or that it is automatically PCI compliant) just because the application is hosted on AWS EC2.

This is a big misconception. AWS has good literature about security, and it clearly describes a shared security responsibility model.

The presentation shared here was given at the AWS meetup in Bangalore organized by Jeevan and Habeeb. As the purpose was just to make people aware of their responsibilities, the details of the individual topics are not covered here. It also does not talk about AWS services in detail.

During the presentation, people wanted to understand the details of some AWS services and also wanted to deep-dive into each aspect. Many of the questions about AWS were discussed in detail by Shailesh, an architect at AWS who was present at the meetup. For the longer topics, we decided to organize follow-up meetups focused on each one.

Challenges with Application Visibility


After I create a web application and make it generally available for people to use, the first and foremost challenge in front of me is to figure out how many servers to deploy to sustain the incoming traffic. To determine the number of servers, I need to know the traffic that is landing on my servers. Knowing aggregate numbers does not really help here, because the traffic does not come at a constant rate; understanding the traffic pattern is what matters. To start with, I need to provision servers based on the maximum load, unless I have a solution that automatically scales my servers based on traffic.
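As a back-of-the-envelope illustration of peak-based provisioning, the calculation is just peak traffic divided by per-server capacity, plus some safety headroom (all the numbers below are assumptions for the example, not measurements):

```python
import math

def servers_needed(peak_rps, per_server_rps, headroom=0.2):
    """Servers to provision for peak traffic plus a safety headroom fraction."""
    return math.ceil(peak_rps * (1 + headroom) / per_server_rps)

# e.g. 5,000 requests/s at peak, 400 requests/s per server, 20% headroom
print(servers_needed(5000, 400))  # → 15
```

The weakness of this approach is exactly what the paragraph above points out: without visibility into the real traffic pattern, `peak_rps` is a guess, and provisioning for the guessed peak wastes capacity the rest of the time.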
Once I make sure that all the traffic is being handled properly, I start worrying about the user experience. The very first impression is built on how long users have to wait between a click and the page load. I broadly divide this into two parts: one is the response time of the server, and the other is the time the browser takes to render the page.
When everything looks good at a gross level, the time comes to drill down deeper and figure out which areas (URLs) of the application are being used more than others. Typically, these are the critical parts of the application workflow and require special attention for availability as well as response time. When I see that traffic to some URL is getting very high, I consider dedicating servers to that traffic and try optimizing that code path.
When the high-priority infrastructure items are taken care of, I would like to know more about the users: how many users are visiting, whether they are new or returning, after how much time they return, how many pages they view (or better, which pages they visit in what order), what browsers or devices they use, where they come from, how they are distributed geographically, and so on.
Next, I want to figure out how much of the traffic is coming from non-useful bots, so that I can block that traffic somehow and focus on the real traffic. I would also like to detect (as well as prevent) traffic sent with malicious intent by an attacker. The malicious intent may be a DDoS-style attack impacting the availability of my application, an attempt to steal data, or just an attempt to deface the user interface.
When I come up with a new version of my application, I would like to compare its performance with the version currently running. Only if I find the performance acceptable would I expose the new version to all my customers.
Providing high-level visibility into these areas is not enough. When something goes wrong, the tool providing application visibility should also help me debug the issue and allow me to drill down to the level of individual requests.
Google Analytics is generally considered to fulfil most of these application visibility needs. However, it does not satisfy them, for the following reasons:

  • Google Analytics only captures traffic in which an HTML page is accessed. All my Ajax calls and requests for other resources are not included
  • Google Analytics gets triggered only when the JavaScript on the page is executed. Access to the page by bots is not included

So what I see in Google Analytics is a subset of the traffic hitting my servers. When reports say that bot traffic amounts to more than 30%, I can't accurately plan based on Google Analytics numbers. Also, Google Analytics, being a client-side JavaScript tool, has no understanding of my deployment infrastructure; it neither helps me drill down into issues nor helps me scale.
One piece that is in the traffic path is the load balancer. But load balancers like AWS ELB, NGINX, or HAProxy provide very limited visibility into traffic and are not really helpful from an insights point of view.
[The post has also been published at Appcito Blog]

Techniques for Scaling Application with Security and Visibility in Cloud

When the traffic to your web application increases, you need to do a better job than just scaling your servers horizontally or vertically.

You not only need to bring about a mind-shift on the software architecture side and make the application scalable at various layers, but also deal with the challenges of infrastructure optimization.

At the TechNext meetup in Pune, the topic was discussed at length, carefully observing the challenges, requirements, and possible solutions at every stage of growth.

The discussion was around the following topics:
  • Architectural mind-shift for scale
  • Infrastructure challenges with traffic growth
    • Load balancing
    • Content switching
    • Content optimization
    • Traffic insights
    • Security against data theft
    • Protection against BOTs and traffic surge
    • DDoS attack prevention
    • Continuous delivery

Social Recommendation that Does NOT Work

A few days back, a couple who are family friends came to me with a proposal to explore a "Business Opportunity using a Social Network." I quickly figured out that they too were telling me the Amway story of the Multi-Level Marketing (MLM) business.
Image Courtesy: First Class MLM

I have been hearing Amway presentations for about 15 years now and have seen them change over time. Typically, the presenter tries hard to convert me into another member of the chain, and as soon as s/he feels it is not happening, s/he leaves.
I had already told this couple that I am not interested in MLM businesses, so they were just practicing on me rather than insisting that I purchase. So we started discussing why there is such poor adoption of Amway products.

My wife's first concern was that the products are generally very costly compared to the products available in the local market.
They said that, according to Amway, local shopkeepers are not able to keep the products in their shops because of the higher price point, so the products are sold only through agents.
They also said that the products only look costly while actually they are not, because they come in very high concentration and are meant to be used after a lot of dilution.
Then they started sharing their own experience of the products they are using.

Both families had to take care of the kids and other household chores, so we stopped the discussion at some point, but I thought a bit more about two things they said:
  • The MLM sales channel is better than the traditional channel of distributors and retailers because the consumer purchases directly from the manufacturer. This way, the company saves the commission of the retail chain and shares the savings with the consumers (i.e., the agents).
  • We recommend products to our friends anyway. In this case, we get some incentive for the recommendation.

My mind was puzzled, because the social recommendation theory is actually correct. We follow the same approach at our company, ShopSocially, and it is giving us good results. It has been shown in multiple research studies, including the latest Forrester research, that product recommendations by friends and family are the most trusted. Data at ShopSocially also suggests that the revenue of all our partner merchants increases after their customers start recommending their products via social media channels.

Social Commerce Strategy : stuff that really works

Then the big question is: why does social recommendation work adversely in the case of companies like Amway? Why do people start keeping their distance from friends who recommend Amway products?

As soon as incentives come into the picture, trust goes down dramatically. Amway products are appreciated only by people in the Amway network. These people get a direct incentive when someone purchases a product because of their recommendation. In the case of the traditional channel, the company or shopkeeper does not offer any incentive for recommending a product, so such recommendations are much more trusted.
Image Courtesy: The Telegraph

Additionally, in the case of the traditional channel, the distributor and retailer take the risk of purchasing a product on behalf of the customer, and this gives a very comfortable feeling. Another big comfort that local shopkeepers typically provide to minimize the customer's risk is allowing the purchase of small quantities for trial; no money-back guarantee can beat that.

Still, there was a small question: ShopSocially's partner merchants also incentivize their customers for sharing feedback and comments about the products, so why are those recommendations still trusted?
I got the answer when I was able to differentiate between incentivized sharing and incentivized recommendation. When the incentive comes to the user just for sharing, the user is still free to write his or her own views about the product. It does not matter whether the views are positive or negative.
But if the incentive comes only when someone buys on your recommendation, you have to write positively, whether you really like the product or not. This is why everyone connected to an MLM company speaks the same language. And this is exactly why such social recommendations, even by well-connected people, are not trusted, even after 50 years.