CSCI 3325
Distributed Systems

Bowdoin College
Fall 2023
Instructor: Sean Barker

Project 1 - Web Server

Release Date: Thursday, September 7.
Acceptance Deadline: Sunday, September 10, 11:59 pm.
Due Date: Sunday, September 24, 11:59 pm.
Collaboration Policy: Level 1
Group Policy: Groups of 2 or 3

In this project, you will implement a basic web server in C using low-level networking primitives. Building your server will teach you the basics of network programming, client/server architectures, and concurrency in high-performance servers.

This project should be done in teams of two or three. However, remember that the objective of working in a team is to work as a team. In other words, you should not try to approach the project by splitting up the work; instead, all team members are expected to work on all parts of the project.

Server Specification

Your task is to write a simple web server capable of servicing remote clients by sending them requested files from the local machine. Communication between a client and the server is defined by the Hypertext Transfer Protocol (HTTP). As such, your server will need to understand HTTP requests sent by clients and send HTTP-formatted responses back to clients.

Your server must support (at least) the following subset of functionality of both the HTTP 1.0 and the HTTP 1.1 standards:

The only request headers you need to be concerned with to implement the required functionality are Host and Connection, and the only response headers you need to be concerned with are Date, Content-Length, and Content-Type. However, feel free to extend your server to provide any functionality not required by the base specification.

Your server program must be written in C on Linux and must accept the following two command-line arguments:

For example, you could start the server on port 8888 using the document root serverfiles like the following:

./server -p 8888 -r serverfiles

Command-line options may appear in arbitrary order; therefore, you should use getopt for parsing arguments. Also note that unless your document root starts with a /, it is a relative path, and therefore is interpreted relative to the current working directory. If either command-line option is omitted, the program should exit with an error message.

As in most web servers, requests for a directory (e.g., GET / or GET /catpictures/) should default to fetching index.html inside the specified directory. In other words, index.html is the default filename if no explicit filename is provided.

A good tutorial on the essentials of the HTTP protocol is linked from the resources at the bottom of this writeup.

Starter Files

Your Git repository includes the following provided starter files:

The only file you must modify is server.c. You are welcome to modify the test document root or create other document roots to use during testing. Note that the provided Makefile is configured to compile with all warnings enabled and warnings treated as errors; this is done intentionally to ensure that you fix compiler warnings rather than ignore them. Fixing warnings will often teach you something about programming even if the warning in question doesn't represent a bug in the program!

Testing the Server

There are several ways to test your server. The first is simply to access your server in a browser. For example, if your server is running on port 8888, then you could type hopper.bowdoin.edu:8888/index.html into your web browser to access index.html on the server. However, testing with a browser is not recommended during early development, as browsers will often simply hang or display nothing if your server isn't responding correctly. A more effective initial testing approach is to use telnet, a tool for sending arbitrarily formatted text messages to any network server. For example, the following connects to google.com on port 80 and then sends an HTTP request for the file index.html:

$ telnet google.com 80
GET /index.html HTTP/1.0


Note that in the above command, you must press Enter twice after the "GET" line (i.e., send a blank line) in order to complete the request. The response to this request will be the HTTP-formatted response from the server. Using telnet will initially be more reliable than a browser, as you will be able to verify that you are getting any response back at all without also having to worry about whether the response is compliant with HTTP.

As an intermediate step between telnet and a full-blown browser, you can also use the wget or curl utilities. These utilities provide command-line HTTP clients: wget will send HTTP/1.0 requests, while curl will send HTTP/1.1 requests (though it can be configured to send HTTP/1.0 requests as well). Consult the man pages for details on proper usage.

A recommended testing strategy is to use telnet initially, then move to wget and/or curl, then finally move to a full-blown browser once things seem to be working. The provided sample document root will be useful in testing that HTTP 1.1 is working properly, as the pages with embedded images will be requested through a single connection when accessed via a browser.

IMPORTANT: Do not leave your server running when you are not actively testing! Whenever you are done testing, make sure to terminate your server (Control-C), especially before logging off the server. Leaving a server running for long periods will occupy port numbers and is a potential security risk.

Implementation Advice

This section contains tips on implementing various parts of the server.

Parsing Command-Line Arguments

You should use the getopt library function for parsing arguments. The basic idea is that getopt is given a string specifying all of the possible command-line options, some of which may take associated values (here, both -p and -r do). The string passed to getopt marks an option that takes a value by including a colon (:) after the associated character, so here you would use "p:r:". An idiomatic usage of getopt is to wrap calls to getopt in a while loop and, inside the loop, switch on the return value to process each option. Within the switch, the predefined global variable optarg will point to the string value passed to that particular option, which you can use to save each argument value.

For a more detailed reference, here is an example of parsing arguments using getopt.

Primary Loop

At a high level, your core server functionality should be structured something like the following:

Forever loop:
   Accept new connection from incoming client
   Parse HTTP request
   Ensure well-formed request (return error otherwise)
   Determine if target file exists and is accessible (return error otherwise)
   Transmit contents of file to client (by performing reads on the file and writes on the socket)
   Close the connection (if HTTP/1.0)
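Before this loop can run, the server needs a listening socket to accept connections on. The sketch below shows the standard socket/bind/listen setup; the function name make_listener is illustrative, and error handling is kept minimal for brevity:

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a TCP socket listening on the given port (0 lets the OS pick a
 * free port). Returns the listening descriptor, or -1 on error. */
int make_listener(int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int yes = 1;  /* SO_REUSEADDR allows quick restarts on the same port */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* accept on any interface */
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

The "forever loop" above would then repeatedly call accept on the returned descriptor to obtain one connected socket per client.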

You have a choice in how you handle multiple clients within the above loop structure. In particular, recall that we discussed three basic approaches to supporting multiple concurrent client connections:

  1. A multi-threaded approach will spawn a new thread for each incoming connection. That is, once the server accepts a connection, it will spawn a thread to parse the request, transmit the file, etc. If you decide to use a multi-threaded approach, you should use the pthreads thread library (e.g., pthread_create), as demonstrated in class.
  2. A multi-process approach maintains a worker pool of active processes to which the main server hands off requests. This approach has the advantage of portability, since it does not depend on the presence of any particular threading library, but it faces increased context-switch overhead relative to a multi-threaded approach. Creating a new process for every request can also work but is not ideal, as it wastes a significant amount of resources. A better approach is to use pipe to allow your processes to communicate (and thereby avoid creating a new process every time).
  3. An event-driven architecture will keep a list of active connections and loop over them, performing a little bit of work on behalf of each connection. For example, there might be a loop that first checks to see if any new connections are pending to the server and then loops over all existing client connections and sends a "block" of file data to each (e.g., 4096 bytes). This event-driven architecture has the primary advantage of avoiding any synchronization issues associated with a multi-threaded model and avoids the performance overhead of context switching among threads or processes. Implementing this approach is likely to require using non-blocking sockets (which we did not specifically discuss in class) and the select system call.

A multi-threaded approach will generally be the most straightforward option, as coordination among processes is more complicated than coordination among threads. An event-driven approach is the most efficient option but also the most complex.
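If you take the multi-threaded route, the accept step hands each new connection to a worker thread. The sketch below assumes a hypothetical handle_request function (stubbed out here for illustration) that would do the actual parsing and responding:

```c
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

/* Stub for illustration; a real server would parse the request and send
 * the response here. This stub just writes one byte. */
void handle_request(int connfd) {
    write(connfd, "x", 1);
}

/* Worker thread: handle one connection, then close it. */
static void *worker(void *arg) {
    int connfd = *(int *)arg;
    free(arg);  /* free the heap copy made by spawn_worker */
    handle_request(connfd);
    close(connfd);
    return NULL;
}

/* Hand an accepted connection off to a detached worker thread. Passing a
 * heap copy of connfd avoids a race with the accept loop reusing the
 * variable before the thread reads it. */
void spawn_worker(int connfd) {
    int *argp = malloc(sizeof(int));
    *argp = connfd;
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, argp) == 0)
        pthread_detach(tid);  /* detached: resources freed when it exits */
    else
        free(argp);
}
```

Detaching the thread means the main loop never joins it, which fits a server that accepts connections indefinitely.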

Translating Filenames

Remember that HTTP requests will specify relative filenames (such as index.html) which are translated by the server into absolute local filenames. For example, if your document root is in ~username/cs3325/proj1/mydocroot, then when a request is received for foo.txt, the file that you should read is actually ~username/cs3325/proj1/mydocroot/foo.txt.

The translated filename may exist and be readable, or it may exist but be unreadable (e.g., due to file permissions), or it may not exist at all. A missing file should result in HTTP error code 404, while an inaccessible file should result in HTTP error code 403. You can test trying to access an inaccessible file by changing file permissions using chmod. For example, chmod a-r foo.txt will render foo.txt unreadable by your server (without altering it), while chmod a+r foo.txt will make it readable again.

Remember that the default filename (i.e., if just a directory is specified) is index.html. This convention is why, for instance, the two URLs http://www.bowdoin.edu and http://www.bowdoin.edu/index.html return the same page.
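The translation described above can be sketched as a small helper. The function name translate_path is hypothetical; it simply concatenates the document root and the request URI, appending the default index.html for directory requests:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: map a request URI onto a local path under the
 * document root, defaulting to index.html for directory requests.
 * Returns 0, or -1 if the result would not fit in out. A real server
 * should also reject URIs containing ".." before opening the file. */
int translate_path(const char *docroot, const char *uri,
                   char *out, size_t outlen) {
    size_t len = strlen(uri);
    const char *suffix = (len == 0 || uri[len - 1] == '/') ? "index.html" : "";
    int n = snprintf(out, outlen, "%s%s%s", docroot, uri, suffix);
    return (n < 0 || (size_t)n >= outlen) ? -1 : 0;
}
```

For example, with document root serverfiles, the URI / becomes serverfiles/index.html and /foo.txt becomes serverfiles/foo.txt.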

HTTP 1.0 and 1.1

When you fetch an HTML web page in a browser (i.e., a file of type text/html), the browser parses the file for embedded links (such as images) and then retrieves those files from the server as well. For example, if a web page contains four images, then a total of five files will be requested from the server. The primary difference between HTTP 1.0 and HTTP 1.1 is how these multiple files are requested.

Using HTTP 1.0, a separate connection is used for each requested file. While simple, this approach is not the most efficient. HTTP 1.1 addresses this inefficiency by keeping connections to clients open, allowing for "persistent" connections and pipelining of client requests. That is, after the results of a single request are returned (e.g., index.html), if using HTTP 1.1, your server should leave the connection open for some period of time, allowing the client to reuse that connection to make subsequent requests.

One design decision here is determining how long to keep the connection open. This timeout needs to be configured in the server and ideally should be dynamic, based on the number of other active connections the server is currently supporting. If the server is idle, it can afford to leave the connection open for a relatively long period of time; if the server is busy servicing several clients at once, it may not be able to afford to have an idle connection sitting around (consuming kernel/thread resources) for very long. You should develop a simple heuristic to determine this timeout in your server (but feel free to start with a fixed value at first).

Socket timeouts can be set using setsockopt. Another option for implementing timeouts is the select call. As usual, consult the man pages for details.
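As a sketch of the setsockopt route, the helper below (the name set_recv_timeout is illustrative) sets a receive timeout on a connected socket; a heuristic in your server might pass a smaller value when many connections are active:

```c
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Set a receive timeout so that recv (or reads through a stream wrapping
 * the socket) fails with EAGAIN/EWOULDBLOCK on an idle connection instead
 * of blocking forever. Returns 0 on success, -1 on error. */
int set_recv_timeout(int fd, int seconds) {
    struct timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}
```

When a read subsequently times out, the server can treat the connection as expired and close it.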

Working with Strings

Since a significant part of this assignment involves working with strings, you will want to refamiliarize yourself with C's string processing routines, such as strcat, strncpy, strstr, etc. Also remember that pointer arithmetic can often result in cleaner code (e.g., by maintaining pointers that you increment rather than numeric indices that you increment).

Sending and Receiving Network Data

When you send or receive data over a network socket, what you are really doing is copying data to or from a lower-level network buffer in the OS. Since these buffers are limited in size, you may not be able to read or send all desired data at once. In other words, when receiving data, you have no guarantee of receiving the entire request at once, and when sending data, you have no guarantee of sending the entire response at once. As a result, you may need to call send or recv multiple times in the course of handling a single request.
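The standard pattern on the sending side is a loop that keeps calling send until the whole buffer has gone out. A minimal sketch (the name send_all is illustrative):

```c
#include <sys/socket.h>
#include <sys/types.h>

/* Repeatedly call send until all len bytes have been transmitted, since a
 * single send may write only part of the buffer.
 * Returns 0 on success, -1 on error. */
int send_all(int fd, const char *buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n < 0)
            return -1;  /* a more robust version would retry on EINTR */
        sent += (size_t)n;
    }
    return 0;
}
```

Transmitting a file is then a matter of reading it block by block and calling a loop like this on each block.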

However, an alternative to using the low-level send and recv functions is to use streams, which you have likely used before in the context of file I/O. Using streams allows you to employ higher-level reading and writing functions like fgets (to read an entire line of data) and fprintf (to write formatted data). To construct a stream from a socket descriptor, just use the fdopen function. You can then use the resulting stream with all of the higher-level I/O functions like fgets and fprintf to receive and send data, rather than the lower-level send and recv calls. Doing so is likely to simplify your string processing code.

Synchronization Issues

Any program involving concurrency (e.g., multiple processes or threads) needs to worry about the issue of synchronization, which refers to ensuring a consistent view of shared data across multiple threads of execution. Remember the general principle that shared data (such as any global variable) should not be modified concurrently by more than one thread, to avoid potential data corruption. For example, it is unsafe to have two threads simultaneously incrementing a shared counter. One specific example in this project where you might want such a counter is if you want to track the number of active client connections.

To safely handle a situation like this, you should use a lock (also known as a mutex), which ensures that only a single thread at a time can execute a protected piece of code. The pthread library provides the pthread_mutex_t type for this purpose. For example, if lock is a pthread_mutex_t, then you could safely increment a shared counter across multiple threads as shown below:

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; // statically initialize the lock

pthread_mutex_lock(&lock); // current thread acquires the lock
global_counter++; // safe; only one thread can hold the lock at a time
pthread_mutex_unlock(&lock); // release the lock so other threads can acquire it

Project Writeup

In addition to the code of your server, you must submit a README document (in plain text format) that contains your names and the following sections:

  1. Design Decisions: Explain and justify any significant design decisions that you made in the course of the project. These decisions should not refer to highly specific, code-level details (e.g., splitting up the code into different functions), but rather to higher-level design decisions that likely have little to do with the nuts and bolts of the code itself. In the case of this specific project, you should definitely address (1) your concurrency design for multiple clients, and (2) how you designed connection timeouts for HTTP 1.1. If there were any other parts of the project that required similar kinds of high-level design decisions, include those too. Don't forget to justify why you made the decisions you made!
  2. Testing: Explain in reasonable detail how you went about testing your server. The purpose of this section is to make you thoughtfully consider both how to test as well as whether you have sufficiently tested. For example, if your only testing consists of sending HTTP 1.0 requests over telnet, you should not have very high confidence that your server is fully functional! Most typically, the "hardest" test that you should be aiming to pass is a sequence of browser requests for pages containing embedded images (which will be requested over HTTP 1.1).
  3. Known Bugs: List any bugs or limitations in functionality that you are aware of. Any information you give here will be helpful to me in fully testing your server and demonstrate that you tested thoroughly yourself. If I come across bugs in my own testing that were not described here, then that will point to a lack of proper testing on your part!

Your writeup should be committed to your repository as a plain text file named README and is due at the same time as your server code.

Logistics and Evaluation

As in Project 0, the link to form a group and initialize your group's project repository on GitHub will be posted to Slack. Once your repository is initialized, clone it to hopper and work there. As a general rule of thumb when working on a group project through GitHub, always pull at the start of a work session and always commit and push at the end of a work session to minimize the chance of a merge conflict. Make sure that your final work (including your writeup) is committed to the repository by the deadline.

To avoid accidentally interfering with the servers of other groups, each group will be assigned a specific (non-standard) port number to use while testing on hopper. Stick to using your assigned port only to avoid conflicting with other groups. However, make sure that you are still able to specify any arbitrary port number via the -p command-line argument. Port assignments will be coordinated over Slack.

Your project will be graded on (1) correctly implementing the server specification, (2) the design and style of your program, and (3) the quality and completeness of your writeup. For guidance on what constitutes good coding design and style, see the Coding Design & Style Guide, which lists many common things to look for. Please ask if you have any other questions about design or style issues. Also don't forget to submit your individual group reports prior to the deadline.

Resources

Here is a list of resources that may be helpful in completing your server: