Sunday, October 15, 2023

Useful Reads

  • https://www.forbes.com/sites/miriamgrobman/2022/01/31/peer-to-boss-the-important-transition-no-one-told-you-about/?sh=765a613635fb
  • https://www.trainingdr.com/blog/the-first-3-conversations-to-have-when-you-are-promoted-to-management-over-your-peers-2
  • https://hbr.org/2007/11/making-the-shift-from-peer-to

Friday, July 15, 2022

Load Balancers

A load balancer accepts incoming traffic from clients and routes requests to its registered targets (such as EC2 instances) in one or more Availability Zones. The load balancer also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets. When the load balancer detects an unhealthy target, it stops routing traffic to that target. It then resumes routing traffic to that target when it detects that the target is healthy again.

You configure your load balancer to accept incoming traffic by specifying one or more listeners. A listener is a process that checks for connection requests. It is configured with a protocol and port number for connections from clients to the load balancer. Likewise, it is configured with a protocol and port number for connections from the load balancer to the targets.
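As a rough illustration (not AWS's actual implementation), the listener and health-check behavior described above can be sketched as a toy Python class: a client-facing protocol/port pair, plus round-robin routing over only the currently healthy registered targets.

```python
class LoadBalancer:
    """Toy load balancer sketch: one listener (protocol/port) and
    round-robin routing across healthy registered targets."""

    def __init__(self, protocol, port):
        self.listener = (protocol, port)  # client-facing protocol and port
        self.targets = {}                 # target id -> healthy flag
        self._i = -1                      # round-robin cursor

    def register(self, target_id):
        self.targets[target_id] = True    # assume healthy until a check fails

    def set_health(self, target_id, healthy):
        self.targets[target_id] = healthy # result of a health check

    def route(self):
        # Route only to targets currently marked healthy.
        healthy = [t for t, ok in self.targets.items() if ok]
        if not healthy:
            raise RuntimeError("no healthy targets")
        self._i += 1
        return healthy[self._i % len(healthy)]


lb = LoadBalancer("HTTP", 80)
lb.register("i-1")
lb.register("i-2")
print(lb.route())          # alternates between i-1 and i-2
lb.set_health("i-1", False)
print(lb.route())          # now only i-2 receives traffic
```

When "i-1" is marked healthy again, it automatically rejoins the rotation, mirroring how the load balancer resumes routing to recovered targets.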

Elastic Load Balancing supports the following types of load balancers:

  • Application Load Balancers

  • Network Load Balancers

  • Gateway Load Balancers

  • Classic Load Balancers

There is a key difference in how the load balancer types are configured. With Application Load Balancers, Network Load Balancers, and Gateway Load Balancers, you register targets in target groups, and route traffic to the target groups. With Classic Load Balancers, you register instances with the load balancer.
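A minimal model of that difference, using hypothetical class names (this is a conceptual sketch, not the AWS API): the newer load balancer types attach target groups and route to them, while the Classic type holds instances directly.

```python
class TargetGroup:
    """Targets are registered into a group, not into the load balancer."""
    def __init__(self, name):
        self.name = name
        self.targets = []

    def register(self, target_id):
        self.targets.append(target_id)


class ApplicationLB:
    """ALB/NLB/GWLB style: routes traffic to attached target groups."""
    def __init__(self):
        self.target_groups = []

    def attach(self, group):
        self.target_groups.append(group)

    def all_targets(self):
        return [t for g in self.target_groups for t in g.targets]


class ClassicLB:
    """Classic style: instances are registered directly with the LB."""
    def __init__(self):
        self.instances = []

    def register(self, instance_id):
        self.instances.append(instance_id)
```

The indirection through a target group is what lets one Application Load Balancer route different listener rules to different sets of targets.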

Reference: 


Friday, July 1, 2022

Caching Solutions


REDIS:

Interactive tutorial: http://try.redis.io/

Alternative to Redis: Memcached




Caching Best Practices 

  1. Validity
  2. High Hit Rate
  3. Cache Miss
  4. TTL
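
The validity, hit rate, cache miss, and TTL ideas above can be sketched in one small class (a minimal sketch, not a production cache): entries expire after a fixed TTL, an expired entry counts as a miss, and the hit rate is tracked on reads.

```python
import time

class TTLCache:
    """Minimal TTL cache sketch: validity is enforced on read,
    and an expired entry is treated as a cache miss."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}               # key -> (value, expiry timestamp)
        self.hits = 0
        self.misses = 0

    def put(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            self.hits += 1
            return entry[0]
        self.store.pop(key, None)     # drop the stale entry, if any
        self.misses += 1
        return None

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Choosing the TTL is the validity/hit-rate trade-off: a longer TTL raises the hit rate but risks serving stale data.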

There are several strategies for reading and writing through a cache:

  1. Cache aside - The application reads from the cache first; on a miss it loads from the DB and populates the cache itself.
  2. Read through - The cache sits in front of the DB and loads missing entries from the DB on a miss.
  3. Write through - Write to the cache, then write to the DB, and then return success.
  4. Write around - Write to the DB and return success. The cache is updated on the next cache miss.
  5. Write back - Write to the cache and return success. A separate system asynchronously writes the data to the DB.

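
A minimal sketch of the write strategies above, using plain dicts as stand-ins for the real cache and DB (write back is omitted since it needs an asynchronous flusher):

```python
# Hypothetical in-memory stand-ins for the real stores.
db, cache = {}, {}

def write_through(key, value):
    """Write to the cache, then to the DB, then return success."""
    cache[key] = value
    db[key] = value
    return True

def write_around(key, value):
    """Write only to the DB; the cache fills on the next miss."""
    db[key] = value
    cache.pop(key, None)        # invalidate any stale cached copy
    return True

def read_cache_aside(key):
    """Cache-aside read: on a miss, load from the DB and populate."""
    if key in cache:
        return cache[key]
    value = db.get(key)
    if value is not None:
        cache[key] = value      # the miss repopulates the cache
    return value
```

Note how write around plus cache-aside reads work together: the write invalidates, and the next read's miss repopulates.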
Fault Tolerance for Cache
  1. Regular Interval Snapshot
  2. Log Reconstruction
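
The regular-interval snapshot idea can be sketched in a few lines (a toy JSON dump, not how Redis persistence actually works): periodically dump the cache contents to disk so a restarted node can warm up instead of starting cold.

```python
import json
import os
import tempfile

def snapshot(cache, path):
    """Dump the cache contents to disk at a regular interval."""
    with open(path, "w") as f:
        json.dump(cache, f)

def restore(path):
    """On restart, reload the last snapshot; start cold if none exists."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)


path = os.path.join(tempfile.mkdtemp(), "snap.json")
snapshot({"user:1": "alice"}, path)
print(restore(path))   # the restarted node starts warm
```

Log reconstruction is the complementary approach: replay a log of writes made since the last snapshot to recover the entries the snapshot missed.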

Learn more about caching using this video 

How to design a Distributed Cache?

Reading 

https://codeahoy.com/2017/08/11/caching-strategies-and-how-to-choose-the-right-one/

https://www.instaclustr.com/blog/redis-vs-memcached


Thursday, June 30, 2022

General Topics for Distributed System

 

What is a Service Mesh?


Google File System


Map Reduce



Fallacies of Distributed Systems

  1. The network is reliable;
  2. Latency is zero;
  3. Bandwidth is infinite;
  4. The network is secure;
  5. Topology doesn't change;
  6. There is one administrator;
  7. Transport cost is zero;
  8. The network is homogeneous.

https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing 


Software Quality
https://asq.org/quality-resources/software-quality