HTTP in Depth

Content

  • What is HTTP?
  • How Does HTTP Work?
  • HTTP Requests
  • HTTP Responses
  • HTTP Vs HTTPS
  • HTTP/1 Vs HTTP/2
  • HTTP Proxies

What is HTTP?

HTTP (Hypertext Transfer Protocol) is the set of rules that governs how resources such as HTML documents, images and scripts are transferred between clients and servers over the internet.

How Does HTTP Work?

HTTP transmits data between user devices and servers over the internet using a request-response model. The web browser sends a request for the files required for a page to load, and the server responds with those files if the request succeeds. Simply put, HTTP is the way client devices and web servers interact with each other.
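This request-response cycle can be demonstrated end to end with Python's standard library. The sketch below starts a throwaway local server in a background thread and fetches a page from it; the handler and the page contents are invented for illustration.

```python
import http.server
import threading
import urllib.request

class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello from the server</body></html>"
        self.send_response(200)                        # status line
        self.send_header("Content-Type", "text/html")  # response headers
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # response body

    def log_message(self, *args):
        pass  # keep the example's output quiet

# Bind to port 0 so the OS picks a free port for us
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: send a GET request and read the response
with urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html") as resp:
    status = resp.status
    html = resp.read().decode()

server.shutdown()
print(status, html)  # 200 and the HTML the handler wrote
```

In a real browser the same exchange happens over the network, but the shape is identical: one request out, one response back.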

Requests

HTTP supports several types of requests:

  • GET: A GET request asks the server to retrieve a specific resource. For example, when you visit “https://fake-website.com/login,” your web browser sends a GET request for the ‘login’ resource from the web root (where the website files for “fake-website” are stored)
  • HEAD: A HEAD request only asks for the metadata of a specified resource without asking for the data itself. This sort of request is more commonly used by developers who are testing their site
  • POST: A POST request submits data to the server, such as the contents of a web form; the server still returns a response, typically indicating whether the submission was accepted. It is commonly used to collect information from users
  • PUT: A PUT request is used to ask a web server to store data being sent by the web browser in the specified resource (if authorised). For example, if a PUT request was sent to “https://fake-website.com/aboutme,” the data sent from the web browser would replace the information in that file if it was authorised to do so.
  • DELETE: A DELETE request deletes the resource specified by the web browser if it is authorised to do so.
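To see what these methods look like on the wire, the raw text of a request can be built by hand. This is a minimal sketch; fake-website.com comes from the examples above and is not a real host.

```python
def build_request(method, path, host, body=""):
    """Assemble the raw text of an HTTP/1.1 request."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    if body:
        # A request carrying data declares its length and type
        lines.append(f"Content-Length: {len(body.encode())}")
        lines.append("Content-Type: application/x-www-form-urlencoded")
    lines.append("")  # blank line separates headers from the body
    return "\r\n".join(lines) + "\r\n" + body

get_req = build_request("GET", "/login", "fake-website.com")
post_req = build_request("POST", "/login", "fake-website.com", "user=alice")
print(get_req)
print(post_req)
```

Note that only the first line changes between methods: a HEAD or DELETE request has exactly the same shape as the GET, just with a different verb.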

Responses

The HTTP response is split into three parts: a status code, a header and an optional body. The HTTP status code indicates the status of the request. The response header contains metadata about the response, such as its length and the name of the server. The HTTP response body contains the requested data, for example the HTML of a page, if the request is successful.
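These three parts can be picked apart programmatically. The sketch below splits a raw response into status line, headers and body; the sample response text is made up for illustration.

```python
# A raw HTTP response as it arrives off the wire (invented sample)
raw = (
    "HTTP/1.1 200 OK\r\n"
    "Server: nginx\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "<html></html>"
)

# A blank line (CRLF CRLF) separates the head from the body
head, _, body = raw.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
version, code, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)

print(code, reason)       # 200 OK
print(headers["Server"])  # nginx
print(body)               # <html></html>
```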

HTTP Status Codes

In response to HTTP requests, servers issue status codes which indicate things such as an error in the request or whether the file was found. Some common examples of status codes are:

  • 200: OK, the GET or POST request worked and is being acted upon
  • 301: Moved Permanently, the URL of the requested resource has been permanently changed
  • 401: Unauthorized, the client making the request has not been authenticated
  • 404: Not Found, the most frequent error code; it means that the requested URL has not been found
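The full registry of status codes ships with Python's standard library, which is handy for looking up the reason phrase for a numeric code:

```python
from http import HTTPStatus

# Look up the standard phrase and description for some common codes
for code in (200, 301, 401, 404):
    status = HTTPStatus(code)
    print(code, status.phrase, "-", status.description)
```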

HTTPS

HTTPS adds Transport Layer Security (TLS), the successor to Secure Sockets Layer (SSL), on top of regular HTTP. HTTPS encrypts and decrypts HTTP requests and the responses returned by the web server, protecting them from eavesdropping and man-in-the-middle attacks. The added layer of security is crucial for creating a safe and trustworthy website nowadays.
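In Python, the TLS layer that HTTPS relies on is configured through an `ssl` context. A default client context, as used under the hood when opening an https:// URL, enables certificate verification and hostname checking out of the box:

```python
import ssl

# The default client context used for HTTPS connections:
# server certificates must validate against trusted CAs,
# and the hostname must match the certificate.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# This context could then be passed to an HTTPS client, e.g.
# http.client.HTTPSConnection("fake-website.com", context=ctx)
```

It is these two checks, on top of the encryption itself, that defend against man-in-the-middle attacks.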

HTTP/1 Vs HTTP/2

HTTP/1 has been around for a very long time, and although it is human-readable, it has a number of inefficiencies. For example, text-based protocols are a lot less efficient than binary protocols for computers to communicate with each other. The goal of HTTP/2 is to reduce latency (the time it takes for data to pass from one point on the network to another). It does so with plenty of new features like:

  • Compressed HTTP headers (the HPACK format)
  • Multiple HTTP requests in flight at once, without waiting for each response
  • Requests and responses interleaved over a single TCP connection (multiplexing)
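The latency benefit of multiplexing can be illustrated with a toy simulation. This is not real HTTP/2; the sketch below just fakes network round trips with `time.sleep` to show why overlapping requests beats issuing them one at a time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

RTT = 0.05  # pretend each request/response round trip takes 50 ms

def fetch(resource):
    time.sleep(RTT)  # simulated network round trip
    return f"body of {resource}"

resources = ["/index.html", "/style.css", "/app.js", "/logo.png"]

# HTTP/1-style: one request at a time over the connection
start = time.monotonic()
for r in resources:
    fetch(r)
sequential = time.monotonic() - start

# HTTP/2-style multiplexing: all requests in flight concurrently
start = time.monotonic()
with ThreadPoolExecutor() as pool:
    list(pool.map(fetch, resources))
multiplexed = time.monotonic() - start

print(f"sequential:  {sequential:.2f}s")
print(f"multiplexed: {multiplexed:.2f}s")
```

With four resources, the sequential version pays four round trips while the concurrent version pays roughly one, which is the effect multiplexing has on a page that pulls in many files.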

HTTP Proxies

Proxy servers are servers, computers or other machines that sit between the client device and the server, acting as an intermediary for requests and responses. An advantage of a proxy is that frequently visited sites are likely to be contained in the proxy’s cache, which improves user response time. Web developers typically use proxies for:

  • Caching: Cache servers can save web pages locally, allowing for faster content retrieval
  • Authentication: controlling access privileges to applications and online information
  • Logging: the storage of historical data
  • Web filtering: controlling access to malicious or inappropriate web pages
  • Load balancing: requests can be handled by many servers instead of just one
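On the client side, routing traffic through a proxy is a one-line configuration in most HTTP libraries. The sketch below uses Python's `urllib`; the proxy address is hypothetical, chosen only for illustration.

```python
import urllib.request

# Route all HTTP and HTTPS traffic through a (hypothetical) proxy
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example:8080",
    "https": "http://proxy.example:8080",
})
opener = urllib.request.build_opener(proxy)

# opener.open("http://fake-website.com/") would now send the request
# to proxy.example:8080, which forwards it to the real server
print(proxy.proxies["http"])
```

From the destination server's point of view, the request appears to come from the proxy, which is what makes caching, filtering and logging at the proxy possible.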
