Thursday, May 6, 2021

Following an HTTP GET / through Switches, Routers, Gateways, and Proxies (Detailed Examples)

 


In this networking video, I'll explain the difference between a gateway and a proxy, and also illustrate the purpose of routers and switches along the way. I'll execute an HTTP GET request and follow its on-wire representation across multiple networks. Follow the timestamps for the three examples I use.
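For reference, this is roughly what that on-wire request looks like. Here is a minimal Node.js sketch (the hostname is just a placeholder) that writes a raw GET / over a plain TCP socket and prints the reply:

  // Send a raw HTTP GET / over a plain TCP socket (port 80) and print the reply.
  // The hostname is a placeholder.
  const net = require("net");

  const host = "example.com";
  const socket = net.connect(80, host, () => {
    // This string is the actual on-wire representation of a simple GET request
    socket.write(`GET / HTTP/1.1\r\nHost: ${host}\r\nConnection: close\r\n\r\n`);
  });

  socket.on("data", (chunk) => process.stdout.write(chunk));
  socket.on("end", () => console.log("\n-- connection closed --"));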




Friday, December 4, 2020

Will AWS Babelfish Succeed in Moving Developers Away from SQL Server to Postgres?

At AWS re:Invent, Amazon announced it is open-sourcing Babelfish for PostgreSQL, a SQL Server-compatible endpoint for PostgreSQL that makes PostgreSQL fluent in understanding communication from apps written for SQL Server. Let's discuss what this technology is and whether it will really move developers away from Microsoft SQL Server to Postgres.

Resources https://aws.amazon.com/blogs/opensource/want-more-postgresql-you-just-might-like-babelfish/
Watch the video here




Tuesday, April 21, 2020

System Design and Backend Engineering Videos From Zero to Hero

This collection of videos covers major system design concepts and fundamentals that every backend engineer needs to understand. This will help you in your design interviews. The videos are ordered in the way they should be consumed.

Backend Engineering Playlist 


Wednesday, March 11, 2020

SameSite Cookie Attribute Explained by Example (Strict, Lax, None & No SameSite)

The recent version of Chrome has broken some workflows involving SameSite cookies. A few weeks ago I made a video discussing the SameSite attribute change in Chrome and why it is a great change that will end CSRF.

It looks like Chrome 80 is officially out now, and some websites are broken or stuck in infinite loops. This is because cookies without a SameSite attribute are now treated as SameSite=Lax, which means the cookie will not be sent cross-site except on top-level navigation with a GET request (for example, clicking a link).
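As a rough illustration (my own sketch, not code from the video; the cookie names and values are made up), here is how a server might set cookies with explicit SameSite attributes in Node.js:

  // Setting cookies with explicit SameSite attributes in Node.js.
  // Cookie names and values are illustrative only.
  const http = require("http");

  http.createServer((req, res) => {
    res.setHeader("Set-Cookie", [
      // Sent only on same-site requests
      "sessionId=abc123; SameSite=Strict; Secure; HttpOnly",
      // Chrome 80+ default when no SameSite is given: sent cross-site only
      // on top-level GET navigations, e.g. clicking a link
      "theme=dark; SameSite=Lax",
      // Sent on all cross-site requests, but must also be marked Secure
      "tracker=xyz; SameSite=None; Secure"
    ]);
    res.end("cookies set");
  }).listen(8080);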

Saturday, February 29, 2020

gRPC Pros & Cons

gRPC (gRPC Remote Procedure Calls) is an open-source remote procedure call (RPC) system initially developed at Google in 2015. It uses HTTP/2 for transport and Protocol Buffers as the message format. In this video I want to explore gRPC, go through examples, and cover the pros and cons of gRPC.

Topics covered: client/server communication (SOAP, HTTP/REST, WebSockets), client libraries, gRPC, and a gRPC demo (todos).

gRPC Pros
  • Fast, with request/response as well as uni- and bidirectional streaming
  • Uniform: one library to rule them all
  • Progress feedback for long synchronous requests
  • Ability to cancel a request
  • All the benefits of HTTP/2 and Protocol Buffers

gRPC Cons
  • Schema based (not everyone wants a schema)
  • Thick client
  • Limited language support
  • Proxies still don't understand it
  • Still young
  • Error handling
  • No native browser support
  • Timeouts and circuit breakers, just like any RPC

Can you create your own protocol? Spotify did, with Hermes.

Source Code
https://github.com/hnasr/javascript_playground/tree/master/grpc-demo

Timecodes
  Motivation and client/server communication 4:30
  The problem with client libraries 8:40
  Why gRPC / gRPC modes 16:40
  Unary 17:20
  Server streaming 17:40
  Client streaming 18:30
  Bidirectional 19:10
  Coding 19:40
  Pros & Cons 57:00
  Build your own 1:12:30
  Cards: 00:30, HTTP/2 7:00, GraphQL 8:00, WebSockets 23:00, Protobuf

Resources
https://grpc.io/docs/guides/
HAProxy gRPC support: https://www.haproxy.com/blog/haproxy-1-9-2-adds-grpc-support/
Nginx gRPC support: https://www.google.com/amp/s/www.nginx.com/blog/nginx-1-13-10-grpc/amp/
https://grpc.io/docs/guides/concepts/

Stay Awesome,
Hussein
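To give a flavor of what the coding section covers, here is a minimal unary gRPC call in Node.js using @grpc/grpc-js and @grpc/proto-loader. This is my own sketch rather than the exact code from the repo above; the todos.proto file, the package name, and the service name are assumptions:

  // Minimal unary gRPC call sketch. The todos.proto file, the todoPackage
  // package and the Todos service are hypothetical.
  const grpc = require("@grpc/grpc-js");
  const protoLoader = require("@grpc/proto-loader");

  const packageDef = protoLoader.loadSync("todos.proto", { keepCase: true });
  const todoPackage = grpc.loadPackageDefinition(packageDef).todoPackage;

  const client = new todoPackage.Todos(
    "localhost:50051",
    grpc.credentials.createInsecure()
  );

  // Unary mode: one request, one response
  client.createTodo({ id: 1, text: "learn gRPC" }, (err, response) => {
    if (err) return console.error(err);
    console.log("created:", response);
  });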














Saturday, December 21, 2019

Database Engines Crash Course (MyISAM, Aria, InnoDB, XtraDB, LevelDB & RocksDB)










Database engines, also called storage engines or sometimes embedded databases, are software libraries that a database management system uses to store data on disk and perform CRUD operations (create, read, update, delete). Embedded means everything runs inside one piece of software, with no network or client-server setup. In this video I go through a few popular database engines, explain the differences between them, and finally spin up a database, change its engine, and show the different features of each engine.

Timecodes
  What is a database engine 3:00
  MyISAM 9:43
  Aria 16:30
  InnoDB 19:00
  XtraDB 25:30
  LevelDB 27:40
  RocksDB 34:00
  SQLite 38:11
  BerkeleyDB 42:00
  Demo! 47:11
Cards: ACID 4:30, MySQL/JavaScript 56:17

Resources
https://youtu.be/V_C-T5S-w8g
https://mariadb.com/kb/en/library/changes-improvements-in-mariadb-102/
https://mariadb.com/kb/en/library/why-does-mariadb-102-use-innodb-instead-of-xtradb/
https://github.com/facebook/rocksdb/wiki/Features-Not-in-LevelDB
https://mariadb.com/kb/en/library/aria-storage-engine/
https://dev.mysql.com/doc/refman/8.0/en/innodb-index-types.html
https://eng.uber.com/mysql-migration/
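To give a flavor of the demo, here is a rough sketch (not the exact demo code; the connection settings and table name are placeholders) of creating a table with one engine and switching it to another from Node.js using the mysql2 driver:

  // Create a table on MyISAM, then switch it to InnoDB, using the mysql2 driver.
  // Connection settings and the table name are placeholders.
  const mysql = require("mysql2/promise");

  async function main() {
    const conn = await mysql.createConnection({
      host: "localhost",
      user: "root",
      password: "password",
      database: "test"
    });

    // MyISAM: no transactions, table-level locking
    await conn.query(
      "CREATE TABLE employees (id INT PRIMARY KEY, name VARCHAR(100)) ENGINE=MyISAM"
    );

    // InnoDB: ACID transactions, row-level locking
    await conn.query("ALTER TABLE employees ENGINE=InnoDB");

    const [rows] = await conn.query("SHOW TABLE STATUS WHERE Name = 'employees'");
    console.log(rows[0].Engine); // InnoDB

    await conn.end();
  }

  main().catch(console.error);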


Saturday, November 2, 2019

Javascript by Example 3 hour free course


This is a practical Javascript course taught by example; we will build a calculator from scratch. Enjoy!
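Before the chapter breakdown, here is a tiny taste of the core idea behind the calculator (a simplified sketch, not the course's actual source): collect digits and operators into an expression string, then evaluate it.

  // Simplified calculator idea: build an expression string from button presses,
  // then evaluate it. The course covers eval and its caveats in chapter 5.
  let expression = "";

  function press(value) {
    expression += value; // e.g. "7", "+", "3"
  }

  function calculate() {
    const result = eval(expression);
    expression = String(result);
    return result;
  }

  press("7");
  press("+");
  press("3");
  console.log(calculate()); // 10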





Chapter 1: Getting Started (5:00)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode01

Chapter 2: Building User Interface (21:12)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode02

Chapter 3: DOM, Events & Functions (50:30)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode03

Chapter 4: Arrow Functions (1:28:05)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode04

Chapter 5: Evaluate Expressions (eval) (1:44:44)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode05

Chapter 6: Conditions (1:56:16)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode06

Chapter 7: Running on Mobile (2:07:39)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode07

Chapter 8: CSS, Arrays & Loops (2:20:55)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode08

Chapter 9: Debugging (2:42:31)
Source Code: https://github.com/hnasr/javascript-by-example/tree/L1-Episode09

Stay Awesome!
Hussein

Sunday, August 4, 2019

Layer 4 vs Layer 7 Load Balancing Pros and Cons


Load balancing is the process of distributing incoming requests across multiple machines, processes, or services. In this video, we explain two types of load balancers, layer 4 and layer 7. You can watch the video or read the summary below.







Layer 4 Load Balancer (HAProxy, NLB)

Forwards packets based on basic rules; it only knows the IP address and port, and perhaps the latency of the target service, because that is what is available at layers 3/4. This load balancer doesn't look at the content, so it doesn't know the protocol (whether it's HTTP or not), the URL, the path, the resource you are consuming, or whether you are using GET or POST.


Pros
  • Great for simple packet-level load balancing
  • Fast and efficient because it doesn't inspect the data
  • More secure, since it can't really look at your packets; if it were compromised, no one could read the data
  • Doesn't need to decrypt the content; it merely forwards whatever is in it
  • Uses NAT
  • One NATed connection between client and server, so your load balancer can serve a maximum number of TCP connections equal to (number of servers * max connections per server)
Cons
  • Can't do smart load balancing based on content, such as routing requests by the requested media type
  • Can't do real microservices with this type
  • Has to be sticky because TCP is stateful: once a connection is established, all of its segments go to the same backend server. Every packet on that connection goes to one server, and the next connection picks another server based on the algorithm.



Layer 7 (Nginx, HAProxy)


This type of proxy actually looks at the content and has more context: it knows you are visiting the /users resource, so it may forward the request to a different server. It is essential and great for microservices. It knows whether the content is video or image, so it can do compression. It can add its own headers so that other proxies or load balancers can see the request has passed through a proxy. It can also cache; we can't really do caching at layer 4 because we have no clue what's in the packets.

But this is expensive, because it has to decrypt, inspect, and compute.

Pros
  • Smart, flexible routing based on the URL (great for microservices)
  • Provides caching
Cons
  • Expensive; it needs to decrypt traffic
  • Security: you have to share your certificate with the load balancer, so if an attacker compromises the load balancer they have access to all your data
  • The proxy creates multiple connections (client to proxy and proxy to server), so you are bounded by the max TCP connections on your load balancer. For example, if your load balancer supports 200 max TCP connections and you have 5 backend servers, each capable of 200 connections, the load balancer can only serve about 200 - 5 = 195 clients concurrently: 5 connections go from the load balancer to the backend servers and 195 remain for clients. With a layer 4 load balancer, you could serve 200 * 5 = 1000 connections.

Despite all those cons, we almost always end up using a layer 7 load balancer because the benefits outweigh the drawbacks, especially when resources are not a constraint.
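To make the layer 7 idea concrete, here is a minimal content-based routing sketch using only Node's built-in http module. The ports and paths are placeholders, and a real deployment would use something like Nginx or HAProxy instead:

  // Minimal layer 7 routing sketch: pick a backend based on the request path.
  // Ports and paths are placeholders.
  const http = require("http");

  const backends = {
    "/users": { host: "localhost", port: 8081 },
    "/videos": { host: "localhost", port: 8082 }
  };

  http.createServer((clientReq, clientRes) => {
    // Because the proxy terminates the connection, it can see the path and headers
    const prefix = Object.keys(backends).find((p) => clientReq.url.startsWith(p));
    const target = backends[prefix] || backends["/users"];

    const proxyReq = http.request(
      {
        host: target.host,
        port: target.port,
        path: clientReq.url,
        method: clientReq.method,
        headers: { ...clientReq.headers, "x-forwarded-by": "l7-proxy" }
      },
      (proxyRes) => {
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(clientRes);
      }
    );

    clientReq.pipe(proxyReq);
  }).listen(8080);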




Sunday, June 23, 2019

Transport Layer Security - HTTP, HTTPS, TLS 1.2 and TLS 1.3


TLS, which stands for Transport Layer Security, is a protocol for securing communication between a client and a server, specifically for HTTPS. That's what the S stands for.

In this video, we will learn how insecure vanilla HTTP works, then HTTPS, then how HTTPS is made possible by transport layer security, and finally we will talk about the improvements in TLS 1.3, which was published in August 2018.




Vanilla HTTP 

Before we discuss TLS, HTTPS, or anything else, let's go through how an HTTP request works. You type www.husseinnasser.com into the browser and the OSI magic kicks in: the client figures out the IP address of husseinnasser.com by calling DNS, which uses UDP. Then the HTTP application layer makes a GET / request, passing in the IP address and port 80 (the default for insecure HTTP). This creates an underlying TCP connection; the GET / string, among other things, goes into the packet and is sent over. TCP does its thing, the server receives GET / and calls the appropriate process at the backend, which might simply return index.html, set the content type to text/html, and send a big response back to the client. All of this is plain text with no encryption of any kind, and if you watched the OSI video we made, you can tell that people can sniff packets and read data they aren't supposed to see.


HTTPS

HTTPS works by negotiating a symmetric key so that both sides can secure their messages (watch the video we did on encryption). Before we jump to the GET request, a handshake 🤝 must occur between the client and server; the tricky part is exchanging that key. Everything else is the same as above, except the port is 443 instead of 80. Remember, once we lose the TCP connection we have to renegotiate the key, but the beauty of this is that HTTP is stateless, so it keeps working just fine.


TLS 1.2 handshake 🤝

The TLS 1.2 handshake involves two round trips (four message flights). The client sends a client hello in which it includes which encryption algorithms it supports (both symmetric and asymmetric). The server receives the request and replies with the server certificate, which includes the server's public key, along with the cipher suite that will be used. The client receives the server hello, generates a premaster key, encrypts it with the server's public key, and sends it over. The server decrypts the message, gets the premaster key, generates the symmetric key, and finally tells the client that we are good to go.


TLS 1.3 handshake 🤝

TLS 1.3 uses a much shorter and more secure exchange, relying only on Diffie-Hellman for key exchange and completing in a single round trip.
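A quick way to see which TLS version your client ends up negotiating (a small sketch; the hostname is a placeholder) is to open a TLS connection from Node and ask the socket:

  // Open a TLS connection and print the negotiated protocol version and cipher.
  // The hostname is a placeholder.
  const tls = require("tls");

  const socket = tls.connect(443, "example.com", { servername: "example.com" }, () => {
    console.log("negotiated:", socket.getProtocol()); // e.g. "TLSv1.3" or "TLSv1.2"
    console.log("cipher:", socket.getCipher().name);
    socket.end();
  });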


Resources




Saturday, June 15, 2019

Denial Of Service Attack Explained

DoS (denial of service) attacks are a type of attack on a server intended to prevent users from consuming a particular service, usually an HTTP web server. This can happen either by saturating the bandwidth of the pipe going to the server or by bringing the server to its knees so it stops taking requests altogether. In this video we will learn about three different types of DoS attacks and explain each one by example.




Bandwidth based DOS

Single DOS

A single-source DoS means sending a huge amount of data to a server with lower bandwidth from a client with higher bandwidth, which ends up saturating the server's pipe and queuing up future requests; new requests have to wait or may be denied service. For example, say the attacker has 100 Mb/s of upload bandwidth and the server has 10 Mb/s of download bandwidth. If the attacker sends 100 Mb worth of data to the server, it takes 1 second to leave the attacker's pipe, but the server can only download 10 Mb each second, so it needs 10 seconds to completely download and process that 100 Mb. For those 10 seconds the server is fully busy serving just one client; other requests can't even reach the server, they may get queued and may never be executed, and are thus denied service. It is important to note that the server must have an endpoint that actually accepts such large payloads, like a file upload with no size limit. Another example is UDP, where there is no connection at all.

Distributed DOS

DDoS: the previous scenario is less likely in practice, since servers usually have much more bandwidth than a single computer. A common attack is therefore to perform the DoS in a distributed manner. Assume a server with 1 Gb/s and a client with 10 Mb/s: no matter how much data the client wants to send, it can only push 10 Mb per second, and the server can chew through that easily. For example, if the client sends 1 Gb, it leaves the client's pipe in 100 chunks of 10 Mb, meaning the client needs 100 seconds just to upload all the data. The server processes it effortlessly each second and still has plenty of bandwidth left for other requests (1000 - 10). But imagine 100 users with a 10 Mb/s connection each, all coordinating to send 1 Gb worth of data to the server at the same time (it's critical that it's at the same time): at 100 x 10 Mb/s they can send 1 Gb/s in total, the server can only ingest 1 Gb per second, and so it cannot process any other requests because its bandwidth is saturated handling this 1 Gb/s coming from different places. Make it 200 users and you have completely clogged the pipe.


Max connections based DOS

Another type of denial of service attack is to somehow force the server to reach its maximum number of connections. A web server usually sets a maximum number of TCP connections so that it doesn't run out of memory, and an attacker can perform a DoS attack to force the server to reach that limit. Once it does, it won't accept any more connections, thereby denying service to future requests. However, this is not easy; web servers have good preventive measures to minimize unnecessary TCP connections, so you cannot just establish a connection and ghost the server. This isn't your ex-boyfriend. Servers have good timeouts for connections that are idle, terminated, or potentially harmful. One possible attack, though, is to establish a connection but send the data slowly, so that every time the server is about to time out, the timer resets and the connection stays alive. Assuming the max TCP connection count is 200, run your script 200 times and you have created 200 connections to the server, so no new connection can get in.

Servers also take preventive measures against many connections from the same client, in which case you won't be able to execute the 200 slow connections from one machine. Do it from several different machines and you reach the maximum.
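For reference, here is a rough sketch of the kinds of timeout and connection limits a Node.js HTTP server exposes to drop slow or idle connections; the values below are arbitrary examples, not recommendations:

  // Timeout and connection settings a Node.js HTTP server exposes to drop
  // slow or idle connections. The values are arbitrary examples.
  const http = require("http");

  const server = http.createServer((req, res) => {
    res.end("ok");
  });

  server.headersTimeout = 10000;   // client must finish sending headers within 10s
  server.requestTimeout = 30000;   // the whole request must arrive within 30s
  server.keepAliveTimeout = 5000;  // idle keep-alive sockets are closed after 5s
  server.maxConnections = 200;     // hard cap on concurrent sockets

  server.listen(8080);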

Vulnerability based DOS

Another way to deny service is to take the server down altogether. If an attacker knows a zero-day vulnerability, say a stack buffer overflow, they can craft an input that overflows the buffer and overwrites a piece of memory, either terminating the process altogether or, even worse, executing malicious code to gain control over the server.







Sunday, May 26, 2019

When to use GET vs POST? (Caching vs Request size)

GET and POST are the most popular HTTP methods used on the web. Each has its own differences and properties, and it can be confusing to choose when to use POST over GET. I made a video explaining my take on the differences, use cases, and benefits of using GET and POST. Check it out; here is a summary table.

Property                  GET                 POST
Body                      No                  Yes
Data Request Limit        Yes (2048 bytes)    No limit
Data Type                 ASCII only          Any data
Safe                      Yes                 No
Idempotent                Yes                 No
Caching and Prefetching   Yes                 No
Bookmarkable              Yes                 No
Security?                 No                  Yes


Watch the video here 
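As a quick illustration (the URL and fields are placeholders), here is the same data sent as a GET query string versus a POST body using fetch:

  // The same data sent via GET (query string) vs POST (request body).
  // The URL and fields are placeholders.

  // GET: data rides in the URL, so the request can be cached, prefetched and bookmarked
  fetch("https://api.example.com/search?q=databases&page=1");

  // POST: data travels in the body, so there is no URL length limit and any
  // content type can be sent, but the response is not cached
  fetch("https://api.example.com/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ q: "databases", page: 1 })
  });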






Saturday, March 30, 2019

Sidecar pattern in Service Mesh (Explained by Example)


The sidecar pattern is an architecture where two or more processes living on the same host communicate with each other via the loopback interface (localhost), essentially enabling interprocess communication. It is the foundational architecture on which service meshes such as Linkerd and Envoy are built.

In this video, we explain how we do things the classical way, how the sidecar pattern works, and its pros and cons. The sidecar pattern also enabled service meshes such as Linkerd and Istio, which make microservices even better.

While the sidecar pattern was popularized in containerized environments, you can use it in non-containerized environments as well.

Pros 
* Decoupling thick libraries and references.
* Applications can evolve independently. 
* Polyglot - Each sidecar application can be written in its own language

Cons
* Latency 
* Complexity 
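A bare-bones sketch of the idea (this is an illustration, not Linkerd or Envoy; the ports and the remote host are made up): the application only ever talks to localhost, and the sidecar process on the same host handles the actual remote call:

  // Sidecar sketch: the app talks only to localhost; a separate sidecar process
  // on the same host forwards traffic to the real remote service.
  const http = require("http");

  // --- sidecar process (normally a separate binary or container) ---
  http.createServer((req, res) => {
    // The sidecar is where retries, mTLS, metrics or tracing headers would live
    const proxied = http.request(
      { host: "remote-service.internal", port: 80, path: req.url, method: req.method },
      (upstream) => {
        res.writeHead(upstream.statusCode, upstream.headers);
        upstream.pipe(res);
      }
    );
    req.pipe(proxied);
  }).listen(15001, "127.0.0.1");

  // --- application process: unaware of the real network topology ---
  http.get("http://127.0.0.1:15001/orders", (res) => {
    res.pipe(process.stdout);
  });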



Stay Awesome!
Hussein

Saturday, January 12, 2019

To compute on the edge or on the cloud, that is the question.






It’s interesting. When I got my first computer in 1995, I was so excited to buy PC Magazine and get a CD with a bunch of video game demos, trial software, and so on. What all that software had in common back then is that it ran completely independently of any server: plug, install, and play. That came with a limitation: storage and compute. Developers were constrained to PC machines with limited resources, and writing compute-heavy software was difficult because PCs couldn't run it.

Slowly, software started to take a different shape as the client-server architecture emerged. You would install the “client” piece of software on your PC while the rest of the software lived on a server somewhere. This let developers offload compute and storage to the server, making the client code much “thinner”. It also allowed vendors to mass-produce affordable client end-user hardware (laptops, PCs, phones), since the compute required to run client software is minimal. This pattern is still dominant, especially with the emergence of the cloud ☁️.

These days, client hardware is so much more powerful that it is a waste not to utilize it. That is why developers have started to run compute jobs on the “edge” (another word for the client) to avoid the latency of sending a job to the cloud and waiting for the result. This also proves beneficial when the edge's network connection is intermittent.

Will we eventually move back to running everything on the client? Will sending jobs to the cloud become more expensive than executing them locally?

It's interesting.


Sunday, November 25, 2018

My New 2018 Video Course - Python on the Backend




Click to check out the course, 50% off for the holidays!


Do you know Python and want to take it to the next level? How about writing a website in Python, or an API your fellow developers can consume as JSON over simple HTTP? With the boom of microservices and APIs, developers who are used to writing Python scripts can now take their knowledge to the backend. This course will teach you the basics of web servers, how to set up a Python web server, and how to write interesting, cool applications on the backend.

What are the requirements?

Able to understand basic programming principles

What am I going to get from this course? 

Build cool web applications and APIs for other clients to consume
Serve a basic website with Python
Turn your Python script into a web API

What is the target audience? 

Beginner Python developers who are interested in building HTTP web APIs in Python

Check out the course, 50% off for the holidays


https://www.udemy.com/python-on-the-back-end-for-beginners-http-server/?couponCode=PYBACK2018


Sunday, November 18, 2018

My New Book for 2018 just published - Learn GIS Web Programming with ArcGIS Javascript API 4.9 and ArcGIS Online



GIS Programming kindle book
click on the image to purchase the ebook on Amazon




In late 2012 I got an email from a book publisher with a proposal to author a book. They had found me through my blog and linked to a blog post I wrote back in 2009 on ArcGIS Server technology. I accepted their offer and wrote that book, and three others followed.

This made me wonder: why did I write that original blog post? I didn't know that one day a publisher would google that technology, find my post, and make me an offer to write a book. I can't remember exactly why I wrote that post, but I knew I was having fun doing it. Sharing my experience with the world through writing has always felt good to me.

In August 2017 I started a new series on YouTube called Getting Started with ArcGIS Javascript API 4.x. That series became really popular, and the interaction on it inspired me to write this book, to discuss things I might have missed in the video series and to distill all my findings and knowledge into a book. So if you are new to the YouTube channel, consider subscribing to check out more content over there: https://www.youtube.com/igeometry

Today, I decided to relive the experience of writing a book. However, no publisher is backing this book. This book is written from the heart, full of joy, from me to you. It is a brain dump of what I think will be very beneficial work for you guys.

As of the time of writing this preface, I have not picked a title for this book, and I feel good about that. I know the topic and I can imagine what the book will look like. However, I feel that picking a title would force my thoughts down a narrow path and thus limit the potential of what this book could be. Obviously, if you are reading this, it means I have already picked a title.

This book is about building web maps using Javascript technology. I picked Javascript because it is a resilient, lightweight technology that can run on the server and the client, on mobile, on IoT devices, and on supercomputers.

Traditional technology books discuss tools. “This is how to load a web map in a browser.” “This is how to query the REST endpoint.” “This is how to render a 3D map.” You get a catalog of tools and what they do. There is nothing wrong with this format; in fact, it makes a good reference. However, you don't get any context from reading such books that lets you take action and build something. It is like learning what a hammer, a nail, and a screwdriver do, when those tools are useless if no one shows you how to build a table with them.

I like to write my books by example, where I build an app and in the process explain the various tools I plan to use to build it. Personally, I feel this is a better way of learning because it provides context.

I hope you enjoy this book.

What are we building in this book?
We will be building a web mapping application from scratch: an app that helps tourists locate landmarks. The app shows landmarks such as libraries, cafes, restaurants, schools, and much more on a map. It has a search capability that highlights matching landmarks on the map, and it shows nearby landmarks within a specific distance of the current location, so you can answer interesting questions such as “show me all libraries within 100 feet of this coffee shop” or “are there any liquor stores within a mile of this school?” I will provide the sample data, which I created myself; the data is not real, it is just a sample. All we need to do is write the application. The app will run on both mobile and desktop.

Don’t worry if this seems like a lot. We will break down those functionalities into different chapters and slowly walk through each.

Who is this book written for?
Anyone interested in learning how to build a web mapping application. Basic programming knowledge is recommended but not required; I will explain everything that is needed as we go through the book.

System Requirements
I designed this book so you don't need any special license to get started. I will be using a Mac in this book but will include instructions for Windows and Linux. We will use a free ArcGIS Online account to host our landmark data and the ArcGIS Javascript API 4.x to write the web application. I will provide the data in GeoJSON format so we can upload it to ArcGIS Online.

Software Requirements
All you need on your machine is a text editor to write code and a web server to serve static files. I will be using Node JS as the web server and Visual Studio Code as the text editor. We will take care of downloading and installing those two in chapter 1.
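To give a sense of the kind of code the book builds toward, here is a minimal ArcGIS Javascript API 4.x sketch (my own illustrative example, not taken from the book's chapters) that loads a basemap into a map view:

  // Minimal ArcGIS Javascript API 4.x example: load a basemap into a map view.
  // Illustrative only; the container id and coordinates are placeholders.
  require(["esri/Map", "esri/views/MapView"], function (Map, MapView) {
    const map = new Map({ basemap: "streets-vector" });

    const view = new MapView({
      container: "viewDiv",        // id of a <div> element in the page
      map: map,
      center: [-118.244, 34.052],  // longitude, latitude
      zoom: 12
    });
  });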


Saturday, November 3, 2018

What does State Transfer in the REST architecture really mean?


Understanding State Transfer in REST 


One of the most critical properties of the REST architecture (Representational State Transfer) is that the protocol is stateless and the state gets transferred between the client and the server. I personally always found this confusing until I really learned the architecture by actually using it. In this video, I will explain state transfer in REST by example.

In a stateful architecture, the client makes a request to the server and the server “remembers” the client; the next request from the client is served from the state stored locally on the server. The pros of this are that the server picks up where it left off with each request, so request throughput is higher in a stateful architecture, and the client can send less data over the wire. The cons are that if the server goes down, the request cannot be fulfilled, and the client is forced to disconnect, reconnect to another server, and go through the entire process again.

REST, however, is a stateless architecture where every request is responsible for “bringing” as much information about the client as possible so the server can reconstruct the state from scratch. This means that no matter which server the client hits, the request can always be fulfilled, so you get higher availability. This is where the “state transfer” in REST comes from. One disadvantage of this architecture is that the client now sends more information over the wire, so your application consumes more bandwidth as a result; this is less of an issue with the introduction of protocol buffers and HTTP/2. Another disadvantage is that throughput goes down, since each request has to wait for the state to be “replayed” and reconstructed.
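As a small illustration (the endpoint and token are made up), a stateless request carries everything the server needs to figure out who you are and what you want on every single call:

  // A stateless REST call: every request carries what the server needs to
  // rebuild the client's state. The endpoint and token are made up.
  fetch("https://api.example.com/orders?page=2&pageSize=50", {
    headers: {
      // identity travels with the request instead of living in server memory
      "Authorization": "Bearer eyJhbGciOi...",
      "Accept": "application/json"
    }
  })
    .then((res) => res.json())
    .then((orders) => console.log(orders));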

Hope you guys enjoy the video
Check out the other content of this channel




For more software engineering videos click this link www.husseinnasser.com/softwareengineering

Hussein Nasser



Thursday, October 25, 2018

Announcing my Podcast

Podcast

If you are interested in GIS and Software Engineering, you will enjoy my podcast. Check it out! 


Saturday, October 6, 2018

Product Architect vs Solutions Architect

In this episode of #softwaretalk, we discuss the differences between a software product architect and a solutions architect. We start by defining the difference between a software product and a solution, then discuss the responsibilities of a product architect versus a solutions architect.

If you are interested in becoming a solutions or product architect or engineer, you've come to the right place.

Cheers
Hussein




Friday, October 5, 2018

Reverse Engineering Twitter

This is our reverse engineering series, where we pick a mainstream app and try to understand how the developers built it: how the APIs are designed on the backend and how the front-end user experience is designed for performance, efficiency, and business decisions. We can become better software engineers by learning how the likes of Google, Facebook, and Twitter build APIs and user experiences. Obviously, I might make a mistake here and there, but that is part of the fun! In this episode, we try to reverse engineer the Twitter feed. We discuss how the iOS Twitter app does efficient thumbnail caching and insane client-side queuing of tweets, likes, and retweet actions. Enjoy!




Enjoy!
Hussein