The first web application I built was called Terrania. A visitor could come to the web site, create a virtual creature with some customizations, and then track that creature's progress through a virtual world. Creatures would wander about, eat plants (or other creatures), fight battles, and mate with other players' creatures. This activity would then be reported back to players by twice-daily emails summarizing the day's events.
Calling it a web application is a bit of a stretch; at the time I certainly wouldn't have categorized it as such. The core of the game was a program written in C++ that ran on a single machine, loading game data from a single flat file, processing everything for the game "tick," and storing it all again in a single flat file. When I started building the game, the runtime was destined to become the server component of a client-server game architecture. Programming network data-exchange at the time was a difficult process that tended to involve writing a lot of rote code just to exchange strings between a server and client (we had no .NET in those days).
The Web gave application developers a ready-to-use platform for content delivery across a network, cutting out the trickier parts of client-server applications. We were free to build the server that did the interesting parts while building a client in simple HTML that was trivial in comparison. What would have traditionally been the client component of Terrania resided on the server, simply accessing the same flat file that the game server used. For most pages in the "client" application, I simply loaded the file into memory, parsed out the creatures that the player cared about, and displayed back some static information in HTML. To create a new creature, I appended a block of data to the end of a second file, which the server would then pick up and process each time it ran, integrating the new creatures into the game. All game processing, including the sending of progress emails, was done by the server component. The web server "client" interface was a simple C++ CGI application that could parse the game datafile in a couple of hundred lines of source.
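The append-and-process pattern described above can be sketched in a few lines; this is a hypothetical Python illustration of the idea, not the original C++, and all names are invented:

```python
# Hypothetical sketch of the append-then-process pattern: the web "client"
# appends new-creature records to a pending file, and the game server
# consumes and truncates that file on each tick.

PENDING = "new_creatures.dat"

def enqueue_creature(record: str) -> None:
    # The CGI side: a simple append, fast enough that locking never mattered.
    with open(PENDING, "a") as f:
        f.write(record + "\n")

def process_pending() -> list[str]:
    # The server side, run once per tick: read everything, then start fresh.
    try:
        with open(PENDING) as f:
            records = [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        return []
    open(PENDING, "w").close()  # truncate, ready for the next tick
    return records
```

The design works only because writes are rare, appends are atomic enough at this scale, and the server is the single consumer.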
This system was pretty satisfactory; perhaps I didn't see the limitations at the time because I didn't come up against any of them. The lack of interactivity through the web interface wasn't a big deal as that was part of the game design. The only write operation performed by a player was the initial creation of the creature, leaving the rest of the game as a read-only process. Another issue that didn't come up was concurrency. Since Terrania was largely read-only, any number of players could generate pages simultaneously. All of the writes were simple file appends that were fast enough to avoid spinning for locks. Besides, there weren't enough players for there to be a reasonable chance of two people reading or writing at once.
A few years would pass before I got around to working with something more closely resembling a web application. While working for a new media agency, I was asked to modify some of the HTML output by a message board powered by UBB (Ultimate Bulletin Board, from Groupee, Inc.). UBB was written in Perl and ran as a CGI. Application data items, such as user accounts and the messages that comprised the discussion, were stored in flat files using a custom format. Some pages of the application were dynamic, being created on the fly from data read from the flat files. Other pages, such as the discussions themselves, were flat HTML files that were written to disk by the application as needed. This render-to-disk technique is still used in low-write, high-read setups such as weblogs, where the cost of generating the viewed pages on the fly outweighs the cost of writing files to disk (which can be a comparatively very slow operation).
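The render-to-disk trade can be sketched as follows; this is a minimal Python illustration of the technique, not UBB's actual Perl code, and the page format is invented:

```python
# Hypothetical render-to-disk sketch: a page is regenerated only when the
# underlying data changes (rare), so the common case -- a read -- is just
# serving a static file, with no parsing or templating per request.

def render_thread(thread_id: int, messages: list[str]) -> str:
    # Called on write: rebuild the static HTML page for one discussion.
    html = "<html><body>" + "".join(f"<p>{m}</p>" for m in messages) + "</body></html>"
    path = f"thread_{thread_id}.html"
    with open(path, "w") as f:
        f.write(html)
    return path

def serve_thread(thread_id: int) -> str:
    # Called on read: just hand back the file from disk.
    with open(f"thread_{thread_id}.html") as f:
        return f.read()
```

In a real deployment the web server would serve the static file directly, without any application code running on the read path at all.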
The great thing about the UBB was that it was written in a "scripting" language, Perl. Because the source code didn't need to be compiled, the development cycle was massively reduced, making it much easier to tinker with things without wasting days at a time. The source code was organized into three main files: the endpoint scripts that users actually requested and two library files containing utility functions (called ubb_library.pl and ubb_library2.pl, seriously).

After a little experience working with UBB for a few commercial clients, I got fairly involved with the message board "hacking" community, a strange group of people who spent their time trying to add functionality to existing message board software. I started a site called UBB Hackers with a guy who later went on to be a programmer for Infopop, writing the next version of UBB.
Early on, UBB had very poor concurrency because it relied on nonportable file-locking code that didn't work on Windows (one of the target platforms). If two users were replying to the same thread at the same time, the thread's datafile could become corrupted and some of the data lost. As the number of users on any single system increased, the chance for data corruption and race conditions increased. For really active systems, rendering HTML files to disk quickly becomes a bottleneck on file I/O. The next step now seems like it should have been obvious, but at the time it wasn't.
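The corruption described above is the classic lost-update problem, and the standard fix is an exclusive lock around the write. A minimal Python sketch, with the caveat that `fcntl` is POSIX-only, the very same portability gap that bit UBB on Windows:

```python
# Sketch of advisory file locking around a write. Without the lock, two
# concurrent repliers can interleave writes to the same thread file and
# corrupt it; with it, the second writer blocks until the first finishes.

import fcntl  # POSIX-only -- unavailable on Windows

def append_reply(path: str, reply: str) -> None:
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the exclusive lock
        try:
            f.write(reply + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

Portable locking across platforms generally means either a lock-file convention or handing the problem to something that has already solved it, which is where the next section picks up.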
MySQL 3 changed a lot of things in the world of web applications. Before MySQL, it wasn't as easy to use a database for storing web application data. Existing database technologies were either prohibitively expensive (Oracle), slow and difficult to work with (FileMaker), or insanely complicated to set up and maintain (PostgreSQL). With the availability of MySQL 3, things started to change. PHP 4 was just starting to get widespread acceptance and the phpMyAdmin project had been started. phpMyAdmin meant that web application developers could start working with databases without the visual design oddities of FileMaker or the arcane SQL syntax knowledge needed to drive things on the command line. I can still never remember the correct syntax for creating a table or granting access to a new user, but now I don't need to.
MySQL brought application developers concurrency: we could read and write at the same time and our data would never get inadvertently corrupted. As MySQL progressed, we got even higher concurrency and massive performance, miles beyond what we could have achieved with flat files and render-to-disk techniques. With indexes, we could select data in arbitrary sets and orders without having to load it all into memory and walk the data structure. The possibilities were endless.
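The shift from walking a flat file to indexed, ordered selection can be illustrated with Python's built-in sqlite3 module standing in for MySQL 3; the schema here is invented for the example:

```python
# Illustration of indexed, ordered selection -- the capability flat files
# lacked. sqlite3 stands in for MySQL; the creatures schema is hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE creatures (name TEXT, strength INTEGER)")
conn.execute("CREATE INDEX idx_strength ON creatures (strength)")
conn.executemany(
    "INSERT INTO creatures VALUES (?, ?)",
    [("wolf", 7), ("rabbit", 2), ("bear", 9)],
)

# An arbitrary set, in an arbitrary order, without loading everything
# into memory and sorting it by hand:
strongest = conn.execute(
    "SELECT name FROM creatures ORDER BY strength DESC LIMIT 2"
).fetchall()
```

The index lets the database answer the ordered query without scanning and sorting the whole table, which is exactly the work the flat-file approach forced onto the application.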
And they still are.
The current breed of web applications is still pushing the boundaries of what can be done in terms of scale, functionality, and interoperability. With the explosion of public APIs, the ability to combine multiple applications to create new services has made for a service-oriented culture. The API service model has shown us clear ways to architect our applications for flexibility and scale at a low cost.