MOSS/Developer Notes

From OpenUru

Comments of interest to those working with the MOSS code should go here. If you plan to just run a server and not modify it, you don't need to be here.


Working with the Code

Requirements

  • Handle byte-swapping. A networking person wrote MOSS, and such people care about working with both endiannesses. Do not break it.
  • This does mean you cannot drop in someone else's code handling the MOUL protocol if it assumes you are using a little-endian machine. It is also why incoming byte data is never cast to structs, and why structs are never written out directly.
  • Including "config.h" should always look like this:
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

Code Conventions

  • Keep C++-style comments out of preprocessor lines.
  • Keep code to 78 columns whenever possible.
  • Include (.h) files have comments at the top listing which other include files are required before including this file. (Some may be missing but I tried.)
  • I use emacs. Please keep the /* -*- c++ -*- */ at the top of .h files if they are indeed C++. Feel free to add similar magic for your editor.
  • Please match indentation, spacing, and brace styles, etc. (It is the default "gnu" style in emacs.)
  • Limit direct including of other include files in .h files. I don't remember why, but it's the project's convention. (It does help with dependencies sometimes.)

Advice

  • If you add a client message type, make sure to add it to check_useable(). Otherwise the server will toss the message and drop the client.
  • If you add a client message type, test compiling with the --enable-old-protocol=yes configure option set. You don't need to run it, just compile it. The point of the compile test is to make sure your new message is not referred to when using the older protocols, unless it's supposed to be. If the new message did not exist at all in those older protocols, code referring to it won't compile, and that needs to be fixed.
  • When handling byte-swapping, never assign the result of readNNle() to a larger variable; it is wrong. Consider a 16-bit little-endian number with two bytes XY, represented on the wire as YX. read16le() returns YX, and assigned to a 32-bit number on a big-endian system, the result is 00YX, while a 32-bit little-endian number of value XY is correctly represented as YX00. Assigning the result of readNN() is fine, as that value is in host order. Similarly, you can't writeNNle() a number stored in a smaller type, but that is a harder mistake to make.
  • As a corollary, don't save anything explicitly kept little-endian in anything other than a sized int type. Don't use size_t, enum types, or anything along those lines. Especially not size_t.
  • Remember that protocol changes are not to be made lightly, because they create incompatibilities with existing clients and servers.
  • Be very, very careful about any changes to game server startup code in the backend. Triple-check all the logic (or more). Do not forget to take into account the client disconnecting/timing out after the request, the game server never starting, the game server shutting down as the request arrives, and everything in between.
  • Before writing new DB query code or changing how current ones work, make sure to read up on the pqxx transactor model.
  • There is a reftree.sql file in the test directory. If you are working with the DB and need a view into the trees of data the change notification routines are working with, you may find this file useful. Load it into the DB, and it adds a "reftree" function which, given a node, will generate a list of the whole tree below it by following the noderefs. The output includes node and folder type names and the information used to determine where change notifications have to be sent.

Implementation Choices

Oftentimes, in a hobbyist project such as MOSS was in the beginning, there is not a lot of design documentation. MOSS had some, but most of it became obsolete as goals changed. Here are some short descriptions answering some "Why did you do it this way?" questions.

Terminology used here: the "backend" server is that process named moss_backend, which talks to the DB and handles anything needing centralized management, while "frontend" servers are everything else (auth/file/game/gatekeeper servers talking to the client, and the dispatcher listening for connections). The "backend protocol" is the protocol used between the frontend and backend servers.

Why is there all this read32/read32le stuff?
A networking person wrote MOSS. readNN() byte-swaps a little-endian value (from the protocol on the wire) to host order, and writeNN() byte-swaps from host order to little-endian. readNNle() does not byte-swap at all, returning a number of size NN still in little-endian order. On a little-endian box all of these are no-ops; on a big-endian box they are critical. In addition, these macros/functions handle hardware that does not allow unaligned access of memory locations. At the time of release, MOSS appeared to work correctly on big-endian machines.
Why are there two ways to write every message?
For larger messages, the writev() syscall is considered more efficient because the pieces of a message do not have to be coalesced into a single contiguous buffer before writing. In other words, you save one copy of the entire contents of what you write, at the price of computing and copying the set of iovecs. However, writev() won't work when you have to encrypt the data first, so for encrypted connections we have to copy the data into a buffer and then encrypt it.
Why do the backend message classes have "little-endian" comments all over?
The backend protocol is handled somewhat differently in MOSS. For frontend servers, any data read in and used by the server is stored in host order. But for the backend messages, the data is mostly stored in little-endian order, because the protocol is (currently!) unencrypted and I can use writev() and skip copying into a buffer. For many messages, writev() may not be better than copying, because the iovec is bigger than the data it represents. On the other hand, the actual copy into the buffer is avoided. The performance of each approach is unknown. There is no MessageQueue support for mixing the iovec and buffer strategies in a single queue based on message type.
Why do you use a single-threaded big select loop for each game server?
The other typical strategy would be to have a thread per client and pass messages between threads. The MultiWriterMessageQueue class was originally intended to cover that case (in addition to others). But the fundamental problem with this plan is that I did not have a good portable way to wake up threads when a message is queued. It can be done, but signals are probably not the way to do it.
Speaking of signals, why do you use them between threads instead of epoll()?
I don't have epoll() on my development machine. Signals are the most portable. Replacing the use of signals + select() with libevent might be the right answer because part of its goal is to work on many OSes.
Explain the funky backend protocol numbering scheme?
The protocol was designed to facilitate using multiple processes or servers in the backend. Thus messages have a bitmask "class" which would allow them to be quickly routed to the right location by checking the bitmask, rather than having a giant case statement to decide.
Then, I wanted the pairs of messages to be recognizable from the number, so a message TO the backend is numbered the same as the corresponding message FROM the backend, with the addition of a bit to mark which direction the message is going. Since each server knows its own identity, this bit is basically a big favor for the Wireshark dissector. Trust me, it's really nice.
Finally, because once upon a time I knew many messages by their number, I assigned backend messages the same number as the corresponding frontend (client) message, when it made sense. We don't need to do this going forward, but we aren't allowed to change the current ones.
There are some bits in the middle "left over". Maybe someone will need them for something creative someday!
Why did you choose PostgreSQL?
There were not a lot of differentiators between it and MySQL. But, PostgreSQL provides "triggers" -- this is a way to request that when a table is changed, the DB tells anything subscribed to the triggers about the change. I was intrigued by them, and thought they could prove useful for providing the vault node update notifications to clients (VaultNodeChanged/VaultNodeAdded/VaultNodeDeleted). Unfortunately, doing this ended up requiring at least two connections to the DB, one like the current one, and another to listen for triggers. So triggers were not used in the interest of simplicity in the backend server. Nothing else is preventing their use.
What is up with this giant DB schema!
First, it was a conscious decision to put as much SQL code as possible into stored procedures. This makes it so you can modify the DB functionality without recompiling the server, and it reduces the amount of data you have to push over the socket.
The single giant table, as seen in PotS, UU, and to some extent (how much exactly is unknown) MOUL/MOULa, is very hard to work with, and scalability, relational structure, and clarity are designed right out of it. We were thinking of how to split the database load up to help with the bottleneck the DB still proved to be in MOUL. While no implementation was done for that (realistic expectations of MOSS suggest it is not needed), this did lead to the decision to experiment with splitting the table up into many tables by node type.
There's no way to know if these design choices are more efficient, without changing the design, implementing it, and comparing the performance of the two in a like-for-like way. But it sure is a lot easier to write queries with the stored procedures and tables by node type!
What is this new "notifier" thing in the noderefs table?
It is the answer to the question "How do I know which client to tell about changes to this reference or referred-to node?"
After drawing many pictures of node trees, it became clear that the "owner" field coming from the client was mostly, but not quite, right for this. I was left with the impression that's what "owner" is for... but since it wasn't what I wanted, I added a new field. For player trees, the notifier is the player. For age trees, it's the age, and the DB then looks up which players are interested in that age.
I do not know how Cyan solved the problem, but this was the way to push the decision about who to tell into the DB where most of the data is. Otherwise, the backend (vault) server would have to cache all the active trees itself in order to find the answer.
Why did you make the SDL files be in a directory tree instead of a single directory?
The goal was to improve the age development experience. If the server loads all the SDL at startup, and you want to change the SDL because you have a new version to test, you have to restart the server. If instead, a given age's SDL is loaded at the time the game server starts up, you only need to restart the game server. I then put the files common to all ages in a "common" directory so that it is loaded once, and the data can be shared between game servers, but that is just a small optimization. If users have serious issues with this they can put all the SDL files in the common subdirectory.
Why .mbm and .mbam files?
I did not want the server to have to stat and checksum files all the time, and the files served by the file and auth servers don't change much.
Why didn't you put Python in to script Heek?
I suspect Cyan did this. I am not a fan of embedding scripting where it is not needed, and at the time I had no reason to believe there would ever be new GameMgr types requiring the kind of dynamic rapid development that embedded scripting is good for. Still, it's in the project ideas for those who are into that sort of thing.

Project Ideas

The whole idea of this list is to provide some ideas for anyone wishing to dive into MOSS. Sometimes a little direction helps. See MOSS/Project_Ideas. Of course, you are not limited to this list!

See Also