
Revision 1117975 of An overview of HTTP

  • Revision slug: Web/HTTP/Overview
  • Revision title: An overview of HTTP
  • Revision id: 1117975
  • Created:
  • Creator: bunnybooboo
  • Is current revision? No
  • Comment: edit of my over-zealous use of Web Browser (capital B)

Revision Content

{{HTTPSidebar}}

HTTP is a {{glossary("protocol")}} which allows the fetching of resources, such as HTML documents. It is the foundation of any data exchange on the Web and it is a client-server protocol, which means requests are initiated by the recipient, usually the Web browser. A complete document is reconstructed from the different sub-documents fetched, for instance text, layout description, images, videos, scripts, and more.

[Figure: A Web document is the composition of different resources]

Clients and servers communicate by exchanging individual messages (as opposed to a stream of data). The messages sent by the client, usually a Web browser, are called requests and the messages sent by the server as an answer are called responses.

[Figure: HTTP as an application layer protocol, on top of TCP (transport layer) and IP (network layer) and below the presentation layer]

Designed in the early 1990s, HTTP is an extensible protocol which has evolved over time. It is an application layer protocol that is sent over {{glossary("TCP")}}, or over a {{glossary("TLS")}}-encrypted TCP connection, though any reliable transport protocol could theoretically be used. Due to its extensibility, it is used not only to fetch hypertext documents, but also images and videos, or to post content to servers, as with HTML form results. HTTP can also be used to fetch parts of documents to update Web pages on demand.

Components of HTTP-based systems

HTTP is a client-server protocol: requests are sent by one entity, the user-agent (or a proxy on behalf of it). Most of the time the user-agent is a browser, but it can be anything, for example a robot that crawls the Web to populate and maintain a search engine index.

Each individual request is sent to a server, which handles it and provides an answer, called the response. Between the client and the server there are numerous entities, collectively designated as {{glossary("Proxy", "proxies")}}, which perform different operations and act as gateways or {{glossary("Cache", "caches")}}, for example.

In reality, there are more computers between a browser and the server handling the request: there are routers, modems, and more. Thanks to the layered design of the Web, these are hidden in the network and transport layers. HTTP is on top at the application layer. Though important to diagnose network problems, the underlying layers are mostly irrelevant to a description of HTTP.

Client: the user-agent

Literally, the user-agent is any tool that acts on behalf of the user. In practice, this role is mostly performed by the browser; the few exceptions are programs used by engineers and Web developers to debug their applications.

The browser is always the entity initiating the request. It is never the server (though some mechanisms have been added over the years to simulate server-initiated messages).

To present a Web page, the browser sends an original request to fetch the HTML document of the page, then parses it and sends additional requests for the scripts to execute, the layout information (CSS) to display, and the sub-resources contained in the page (usually images and videos). It then combines these resources to present a complete document, the Web page. Scripts executed by the browser can fetch more resources in later phases and the browser updates the Web page accordingly.

A Web page is a hypertext document, meaning that some parts of the displayed text are links that can be activated (usually by a click of the mouse) to fetch a new Web page, allowing the user to direct their user-agent and navigate the Web. The browser translates these directions into HTTP requests, and interprets the HTTP responses to present the user with an intelligible response.

The Web server

On the opposite side of the communication channel is the server, which serves the document as requested by the client. A server appears as only a single machine virtually: it may actually be a collection of servers sharing the load (load balancing), or a complex piece of software interrogating other computers (like a cache, a DB server, or e-commerce servers), totally or partially generating the document on demand.

A server is not necessarily a single machine; conversely, several server instances can be hosted on the same machine. With HTTP/1.1 and the {{HTTPHeader("Host")}} header, they may even share the same IP address.
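
As a sketch, two requests arriving at the same IP address can be routed to different sites purely by their {{HTTPHeader("Host")}} header (the domain names below are illustrative):

    GET / HTTP/1.1
    Host: shop.example.com

    GET / HTTP/1.1
    Host: blog.example.com

The server, or a front-end proxy, inspects Host to decide which virtual host should handle each request.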

Proxies

Between a browser and the server, numerous computers and machines relay the HTTP messages. Thanks to the layered structure of the Web stack, most of them operate at the transport, network or physical level and are transparent at the HTTP layer (though they may have a significant impact on performance). The ones operating at the application layer are generally called proxies. They can be transparent, forwarding on the requests they receive without altering them in any way, or non-transparent, changing the request in some way before passing it along to the server. Proxies can perform numerous functions (a sketch follows the list):

  • caching (the cache can be public or private, like the browser cache)
  • filtering (like an antivirus scan, parental controls, …)
  • load balancing to allow multiple servers to serve the different requests
  • authentication to control access to different resources
  • logging, allowing the storage of historical information.
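
As a sketch, a forwarding proxy typically announces itself by appending a {{HTTPHeader("Via")}} header to the request it relays (the proxy host name below is illustrative):

    GET / HTTP/1.1
    Host: developer.mozilla.org
    Via: 1.1 proxy.example.com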

Basic aspects of HTTP

HTTP is simple

Even with the extra complexity introduced in HTTP/2 by encapsulating HTTP messages into frames, HTTP is generally designed to be simple and human readable. HTTP messages can be read and understood by humans, which allows easier testing and lowers the bar for newcomers.

HTTP is extensible

Introduced in HTTP/1.0, HTTP headers made the protocol extremely easy to extend and to experiment with. New functionality can be introduced simply by an agreement between a client and a server about the semantics of new headers.
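
For example, a client and a server could agree on a header of their own to negotiate an experimental feature (the header name below is purely hypothetical, not a standard one):

    GET /report HTTP/1.1
    Host: api.example.com
    X-Experimental-Format: compact

A server that does not know the header simply ignores it, which is what makes this kind of experimentation safe.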

HTTP is stateless, but not sessionless

HTTP is stateless: there is no link between two requests being successively carried out on the same connection. This proved problematic as soon as users wanted to interact with a page in a coherent way, for example with an e-commerce shopping basket. Using the extensibility of headers, HTTP cookies were added to the protocol, allowing the creation of sessions: with cookies, each request shares the same context, the same state.

While the core of HTTP itself is stateless, HTTP cookies allow the use of stateful sessions.
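
A minimal sketch of the mechanism (the cookie name and value below are illustrative): the server sets a cookie in its response, and the browser sends it back on subsequent requests, letting the server tie both requests to the same session.

    HTTP/1.1 200 OK
    Content-Type: text/html
    Set-Cookie: sessionid=38afes7a8

    GET /basket HTTP/1.1
    Host: shop.example.com
    Cookie: sessionid=38afes7a8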

HTTP and connections

A connection is controlled at the transport layer and is fundamentally out of scope for HTTP. Nevertheless, HTTP doesn't require the underlying transport protocol to be connection-based; it only requires it to be reliable, that is, not to lose messages (or at least to report the error in such cases). Among the two most common transport protocols on the Internet, TCP is reliable and UDP isn't. HTTP therefore relies on TCP, which also happens to be connection-based, though a connection-based transport is not a requirement.

HTTP/1.0 opened a TCP connection for each request/response exchange, which introduces two major flaws: opening a connection needs several round-trips of messages and is therefore slow, and a connection becomes more efficient only once several messages have been sent, and are sent regularly: warm connections are more efficient than cold ones.

In order to mitigate these flaws, HTTP/1.1 introduced both pipelining (which proved difficult to implement) and persistent connections: the underlying TCP connection can be partially controlled using the {{HTTPHeader("Connection")}} header. HTTP/2 went a step further by multiplexing messages over a single connection, helping keep it warm.
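
A sketch of the HTTP/1.1 behavior: persistent connections are the default, and a client signals that it wants the connection closed after the response by sending the {{HTTPHeader("Connection")}} header (host name illustrative):

    GET /page HTTP/1.1
    Host: example.com
    Connection: close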

Experiments are in progress to design a transport protocol better suited to HTTP. For example, Google is experimenting with QUIC, which builds on top of UDP to provide a more reliable and efficient transport protocol.

What can be controlled by HTTP

The extensible nature of HTTP has allowed more and more control over Web functionality to be added over the years. Cache and authentication methods were functions controlled early in HTTP's history; in contrast, the ability to relax the origin constraint was only added in the 2010s.

This is a list of common features controllable using HTTP.

  • Cache
    How documents are cached can be controlled by HTTP. The server can instruct proxies and clients what to cache and for how long, while the client can instruct intermediate cache proxies to ignore the stored document (see the header sketch after this list).
  • Relaxing the origin constraint
    To prevent snooping and other privacy invasions, browsers enforce a strict separation between Web sites. Only pages from the same origin can access all the information of a Web page. Though such a constraint is a burden, servers can relax it via HTTP headers, so that the document may become a patchwork of information coming from different domains; there can even be security-related reasons to do so (also shown in the sketch after this list).
  • Authentication
    Some pages may be protected so that only specific users can access them. Basic authentication may be provided directly by HTTP, either using the {{HTTPHeader("WWW-Authenticate")}} and similar headers, or by setting a specific session using HTTP cookies.
  • Proxy and tunneling
    Often servers and/or clients are located on intranets and hide their true IP address from others. HTTP requests then go through proxies to cross this network barrier. Not all proxies are HTTP proxies: some, like those using the SOCKS protocol, operate at a lower level, and other protocols, like FTP, can be handled by them too.
  • Sessions
    Using HTTP cookies allows requests to be linked with the state of the server. This creates sessions, despite basic HTTP being a stateless protocol. This is useful not only for e-commerce shopping baskets, but also for any site allowing user configuration of the output.
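
As a sketch of the cache and origin-relaxation features above, a response might carry headers like these (the values are illustrative):

    HTTP/1.1 200 OK
    Content-Type: application/json
    Cache-Control: max-age=3600
    Access-Control-Allow-Origin: https://app.example.com

    {"status": "ok"}

Cache-Control here permits caches to store the document for an hour, while Access-Control-Allow-Origin relaxes the origin constraint for the named site.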

HTTP flow

When a client wants to communicate with a server, whether the final server or an intermediate proxy, it performs the following steps:

  1. Open a TCP connection (or reuse a previous one): the TCP connection is used to send one or several requests, and to receive the answers. The client may reuse an existing connection or open several TCP connections to the servers.
  2. Send an HTTP message: HTTP messages (before HTTP/2) are human-readable. With HTTP/2, these simple messages are encapsulated in frames, making them impossible to read directly, but the principle remains the same.
    GET / HTTP/1.1
    Host: developer.mozilla.org
    Accept-Language: fr
  3. Read the response sent by the server:
    HTTP/1.1 200 OK
    Date: Sat, 09 Oct 2010 14:28:02 GMT
    Server: Apache
    Last-Modified: Tue, 01 Dec 2009 20:18:22 GMT
    ETag: "51142bc1-7449-479b075b2891b"
    Accept-Ranges: bytes
    Content-Length: 29769
    Content-Type: text/html
    
    <!DOCTYPE html... (here comes the 29769 bytes of the requested web page)
  4. Close or reuse the connection for further requests.

When HTTP pipelining is activated, several requests can be sent successively, without waiting for the first response to be fully received. HTTP pipelining has proven difficult to implement in existing networks, where old pieces of software coexist with modern versions, and has been superseded in HTTP/2 by the more robust mechanism of multiplexing requests within frames.
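
A sketch of pipelining on a single connection: the second request is written before the first response arrives, and the server answers in order (paths illustrative):

    GET /style.css HTTP/1.1
    Host: example.com

    GET /script.js HTTP/1.1
    Host: example.com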

HTTP Messages

HTTP/1.1 and earlier HTTP messages are human-readable. In HTTP/2, these messages are embedded into a new binary structure, a frame, allowing optimizations like compression of headers and multiplexing. Even if only part of the original HTTP message is sent in this version of HTTP, the semantics of each message is unchanged and the client reconstitutes (virtually) the original HTTP/1.1 request; it is therefore still useful to consider HTTP/2 messages in the HTTP/1.1 format.

There are two types of HTTP messages, requests and responses, each with its specific format.

Requests

An example HTTP request:

[Figure: A basic HTTP request]

Requests consist of the following elements (a wire-format sketch follows the list):

  • An HTTP method, usually a verb like {{HTTPMethod("GET")}}, {{HTTPMethod("POST")}} or a noun like {{HTTPMethod("OPTIONS")}} or {{HTTPMethod("HEAD")}} that defines the operation the client wants to perform. Typically, a client wants to fetch a resource (using GET) or post the value of an HTML form (using POST), though other operations may be needed in other cases.
  • The path of the resource to fetch; this is basically the URL of the resource stripped of the elements that are obvious from the context, that is, without the {{glossary("protocol")}} (http://), the {{glossary("domain")}} (here developer.mozilla.org), or the TCP {{glossary("port")}} (here 80).
  • The version of the HTTP protocol.
  • Optional headers that convey extra information for the servers.
  • For some methods like POST, a body, similar to the one in responses, that contains the resource sent.
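
As a wire-format sketch of these elements, a POST request carrying a small form body might look like this (host, path, and field names illustrative):

    POST /contact HTTP/1.1
    Host: example.com
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 27

    name=Alice&message=Hi+there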

Responses

An example response:

Responses consist of the following elements (a sketch follows the list):

  • The version of the HTTP protocol they follow.
  • A status code, indicating if the request has been successful or not, and why.
  • A status message, that is a non-authoritative short description of the status code.
  • HTTP headers, like for requests.
  • Optionally, but much more common than in requests, a body containing the fetched resource.
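
As a sketch, a redirect response shows these elements at once: the protocol version, a status code with its status message, and headers (the target URL is illustrative):

    HTTP/1.1 301 Moved Permanently
    Location: https://example.com/new-page
    Content-Length: 0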

Conclusion

HTTP is an extensible protocol that is easy to use. The client-server structure combined with the ability to easily add headers allows HTTP to grow with the extended capabilities of the Web.

Even if HTTP/2 adds some complexity by embedding HTTP messages in frames to improve performance, from the point of view of the application the basic structure of messages has stayed the same since HTTP/1.0. The flow of a session remains simple, allowing it to be investigated and debugged with a simple HTTP message monitor.
