Revision 725270cb

View differences:

conf/cf-lex.l
@@ -456,7 +456,7 @@
  * Grammar snippets are files (usually with extension |.Y|) contributed
  * by various BIRD modules in order to provide information about syntax of their
  * configuration and their CLI commands. Each snipped consists of several
- * section, each of them starting with a special keyword: |CF_HDR| for
+ * sections, each of them starting with a special keyword: |CF_HDR| for
  * a list of |#include| directives needed by the C code, |CF_DEFINES|
  * for a list of C declarations, |CF_DECLS| for |bison| declarations
  * including keyword definitions specified as |CF_KEYWORDS|, |CF_GRAMMAR|
@@ -473,5 +473,5 @@
  *
  * Values of |enum| filter types can be defined using |CF_ENUM| with
  * the following parameters: name of filter type, prefix common for all
- * literals of this type, names of all the possible values.
+ * literals of this type and names of all the possible values.
  */
conf/conf.c
@@ -9,7 +9,7 @@
 /**
  * DOC: Configuration manager
  *
- * Configuration of BIRD is complex, yet straightforward. There exist three
+ * Configuration of BIRD is complex, yet straightforward. There are three
  * modules taking care of the configuration: config manager (which takes care
  * of storage of the config information and controls switching between configs),
  * lexical analyzer and parser.
@@ -18,7 +18,7 @@
  * accompanied by a linear pool from which all information associated
  * with the config and pointed to by the &config structure is allocated.
  *
- * There can exist up four different configurations at one time: an active
+ * There can exist up to four different configurations at one time: an active
  * one (pointed to by @config), configuration we are just switching from
  * (@old_config), one queued for the next reconfiguration (@future_config;
  * if it's non-%NULL and the user wants to reconfigure once again, we just
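The hunk above describes up to four coexisting configurations. As an illustration only, here is a minimal C sketch of such a pointer discipline; the names `config`, `old_config` and `future_config` come from the comment, while `switch_config()`, `config_done()` and the exact queueing rule are assumptions, not BIRD's actual code:

```c
#include <assert.h>
#include <stddef.h>

struct config { int id; };                /* stand-in for the real &config */

static struct config *config;             /* active configuration */
static struct config *old_config;         /* the one we're switching from */
static struct config *future_config;      /* queued for the next reconfiguration */

/* Hypothetical rule: if a switch is still in progress, just (re)queue
 * the new config; otherwise start switching to it immediately. */
static void switch_config(struct config *new)
{
  if (old_config)                         /* previous switch not finished yet */
    {
      future_config = new;                /* replace any previously queued config */
      return;
    }
  old_config = config;
  config = new;
}

/* Called when the old configuration is finally torn down. */
static void config_done(void)
{
  old_config = NULL;
  if (future_config)
    {
      struct config *f = future_config;
      future_config = NULL;
      switch_config(f);
    }
}
```

The point of the queueing rule is that a reconfiguration requested while another one is still in progress merely replaces the queued configuration instead of interrupting the switch.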
doc/prog-intro.sgml
@@ -6,7 +6,7 @@
 design decisions and rationale behind them. It also contains documentation on
 all the essential components of the system and their interfaces.
 
-<p>Routing daemons are very complicated things which need to act in real time
+<p>Routing daemons are complicated things which need to act in real time
 to complex sequences of external events, respond correctly even to the most erroneous behavior
 of their environment and still handle enormous amount of data with reasonable
 speed. Due to all of this, their design is very tricky as one needs to carefully
@@ -47,7 +47,7 @@
 <item><it>Offer powerful route filtering.</it>
 There already were several attempts to incorporate route filters to a dynamic router,
 but most of them have used simple sequences of filtering rules which were very inflexible
-and hard to use for any non-trivial filters. We've decided to employ a simple loop-free
+and hard to use for non-trivial filters. We've decided to employ a simple loop-free
 programming language having access to all the route attributes and being able to
 modify the most of them.
 
@@ -65,8 +65,7 @@
 In addition to the online reconfiguration, a routing daemon should be able to communicate
 with the user and with many other programs (primarily scripts used for network maintenance)
 in order to make it possible to inspect contents of routing tables, status of all
-routing protocols and also to control their behavior (i.e., it should be possible
-to disable, enable or reset a protocol without restarting all the others). To achieve
+routing protocols and also to control their behavior (disable, enable or reset a protocol without restarting all the others). To achieve
 this, we implement a simple command-line protocol based on those used by FTP and SMTP
 (that is textual commands and textual replies accompanied by a numeric code which makes
 them both readable to a human and easy to recognize in software).
@@ -77,7 +76,9 @@
 the scheduler will assign time to them in a fair enough manner. This is surely a good
 solution, but we have resisted the temptation and preferred to avoid the overhead of threading
 and the large number of locks involved and preferred a event driven architecture with
-our own scheduling of events.
+our own scheduling of events. An unpleasant consequence of such an approach
+is that long lasting tasks must be split into more parts linked by special
+events or timers to make the CPU available for other tasks as well.
 
 </itemize>
 
@@ -106,7 +107,7 @@
 of code modules (core, each protocol, filters) there exist a configuration
 module taking care of all the related configuration stuff.
 
-<tagp>Filters</tagp> implement the route filtering language.
+<tagp>The filter</tagp> implements the route filtering language.
 
 <tagp>Protocol modules</tagp> implement the individual routing protocols.
 
@@ -125,25 +126,33 @@
 control over all implementation details and on the other hand enough
 instruments to build the abstractions we need.
 
+<p>The modules are statically linked to produce a single executable file
+(except for the client which stands on its own).
+
 <p>The building process is controlled by a set of Makefiles for GNU Make,
 intermixed with several Perl and shell scripts.
 
 <p>The initial configuration of the daemon, detection of system features
 and selection of the right modules to include for the particular OS
 and the set of protocols the user has chosen is performed by a configure
-script created using GNU Autoconf.
+script generated by GNU Autoconf.
 
 <p>The parser of the configuration is generated by the GNU Bison.
 
 <p>The documentation is generated using <file/SGMLtools/ with our own DTD
-and mapping rules. The printed form of the documentation is first converted
+and mapping rules which produce both an online version in HTML and
+a neatly formatted one for printing (first converted
 from SGML to &latex; and then processed by &tex; and <file/dvips/ to
-produce a PostScript file.
+get a PostScript file).
 
 <p>The comments from C sources which form a part of the programmer's
 documentation are extracted using a modified version of the <file/kernel-doc/
 tool.
 
+<p>If you want to work on BIRD, it's highly recommended to configure it
+with the <tt/--enable-debug/ switch which enables some internal consistency
+checks and also links BIRD with a memory allocation checking library
+if you have one (either <tt/efence/ or <tt/dmalloc/).
+
 <!--
 LocalWords:  IPv IP CLI snippets Perl Autoconf SGMLtools DTD SGML dvips
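The paragraph added above about the event driven architecture (long lasting tasks split into parts linked by events) can be illustrated by a small sketch. Everything here, the `ev_*` names and the job structure alike, is illustrative and not BIRD's real event interface:

```c
#include <assert.h>
#include <stddef.h>

/* A tiny FIFO event queue: a long task re-schedules itself in small
 * steps instead of blocking the loop for other tasks. */

struct event {
  void (*hook)(void *);
  void *data;
  struct event *next;
};

static struct event *ev_head, **ev_tail = &ev_head;

static void ev_schedule(struct event *e)
{
  e->next = NULL;
  *ev_tail = e;
  ev_tail = &e->next;
}

static void ev_run_all(void)
{
  while (ev_head)
    {
      struct event *e = ev_head;
      ev_head = e->next;
      if (!ev_head)
        ev_tail = &ev_head;
      e->hook(e->data);
    }
}

/* A "long" job: count to 1000, but only 100 steps per event. */
struct job { int done; struct event ev; };

static void job_step(void *data)
{
  struct job *j = data;
  int limit = j->done + 100;
  while (j->done < limit && j->done < 1000)
    j->done++;
  if (j->done < 1000)
    ev_schedule(&j->ev);        /* yield: continue in a later event */
}
```

Between the ten small events of this job, the loop is free to run any other events or timers that became ready, which is exactly the fairness the threading alternative would have bought.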
doc/tex/birddoc.sty
@@ -24,8 +24,8 @@
 \advance\textheight -2 ex
 %\renewcommand{\baselinestretch}{1.14}
 \setcounter{tocdepth}{1}
-\oddsidemargin 0.5 in
-\evensidemargin 0 in
+\oddsidemargin 0.15 in
+\evensidemargin -0.35 in
 \textwidth 6.5in
 
 \def\ps@headings{\let\@mkboth\markboth
filter/filter.c
@@ -10,25 +10,23 @@
 /**
  * DOC: Filters
  *
- * You can find sources of filters language in |filter/|
- * directory. |filter/config.Y| filter grammar, and basically translates
- * source from user into tree of &f_inst structures. These trees are
- * later interpreted using code in |filter/filter.c|. Filters internally
- * work with values/variables in struct f_val, which contains type of
- * value and value.
+ * You can find sources of the filter language in the |filter/|
+ * directory. File |filter/config.Y| contains the filter grammar and basically translates
+ * the source from the user into a tree of &f_inst structures. These trees are
+ * later interpreted using code in |filter/filter.c|.
  *
- * Filter consists of tree of &f_inst structures, one structure per
- * "instruction". Each &f_inst contains code, aux value which is
- * usually type of data this instruction operates on, and two generic
- * arguments (a1, a2). Some instructions contain pointer(s) to other
- * instructions in their (a1, a2) fields.
+ * A filter is represented by a tree of &f_inst structures, one structure per
+ * "instruction". Each &f_inst contains @code, an @aux value which is
+ * usually the data type this instruction operates on and two generic
+ * arguments (@a1, @a2). Some instructions contain pointer(s) to other
+ * instructions in their (@a1, @a2) fields.
  *
- * Filters use structure &f_val for its variables. Each &f_val
- * contains type and value. Types are constants prefixed with %T_. Few
- * of types are special; %T_RETURN can be or-ed with type to indicate
- * that return from function/from whole filter should be
- * forced. Important thing about &f_val s is that they may be copied
- * with simple =. That's fine for all currently defined types: strings
+ * Filters use a &f_val structure for their data. Each &f_val
+ * contains type and value (types are constants prefixed with %T_). A few
+ * of the types are special; %T_RETURN can be or-ed with a type to indicate
+ * that return from a function or from the whole filter should be
+ * forced. An important thing about &f_val's is that they may be copied
+ * with a simple |=|. That's fine for all currently defined types: strings
  * are read-only (and therefore okay), paths are copied for each
  * operation (okay too).
  */
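The rewritten paragraph notes that a &f_val may be copied with a simple |=|. Here is a tiny C sketch of why that works for a tagged value whose payloads are plain data or pointers to read-only data; all constants and field names are simplified stand-ins, not the real BIRD definitions:

```c
#include <assert.h>

#define T_INT    0x10
#define T_STRING 0x20
#define T_RETURN 0x40   /* or-ed into a type to force return from a filter */

/* A small tagged value in the spirit of &f_val. */
struct f_val {
  int type;
  union {
    int i;
    const char *s;      /* strings are read-only, so copying the pointer
                           is as good as copying the string */
  } val;
};

static struct f_val make_int(int x)
{
  struct f_val v = { T_INT, { .i = x } };
  return v;
}
```

Because the structure contains no owned heap data, `struct f_val b = a;` is a complete, safe copy, which is what lets the interpreter pass values around by plain assignment.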
lib/ip.c
@@ -15,7 +15,7 @@
  * BIRD uses its own abstraction of IP address in order to share the same
  * code for both IPv4 and IPv6. IP addresses are represented as entities
  * of type &ip_addr which are never to be treated as numbers and instead
- * they should be manipulated using the following functions and macros.
+ * they must be manipulated using the following functions and macros.
  */
 
 /**
lib/resource.sgml
@@ -18,7 +18,7 @@
 <p>We've tried to solve this problem by employing a resource tracking
 system which keeps track of all the resources allocated by all the
 modules of BIRD, deallocates everything automatically when a module
-shuts down and it's is able to print out the list of resources and
+shuts down and it is able to print out the list of resources and
 the corresponding modules they are allocated by.
 
 <p>Each allocated resource (from now we'll speak about allocated
nest/cli.c
@@ -35,12 +35,12 @@
  * on the current state of command processing.
  *
  * The CLI commands are declared as a part of the configuration grammar
- * by using the |CF_CLI| macro. When a command is received, it's processed
+ * by using the |CF_CLI| macro. When a command is received, it is processed
  * by the same lexical analyzer and parser as used for the configuration, but
  * it's switched to a special mode by prepending a fake token to the text,
  * so that it uses only the CLI command rules. Then the parser invokes
  * an execution routine corresponding to the command, which either constructs
- * the whole reply and returns back or (in case it expects the reply will be long)
+ * the whole reply and returns it back or (in case it expects the reply will be long)
  * it prints a partial reply and asks the CLI module (using the @cont hook)
  * to call it again when the output is transferred to the user.
  *
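The @cont hook mechanism described in this hunk (print a partial reply, then ask to be called again once the output has drained) can be sketched as follows; the structure and all names are illustrative, not BIRD's actual CLI interface:

```c
#include <assert.h>
#include <stddef.h>

/* A command expecting a long reply prints one chunk and installs a
 * continuation hook; the CLI core keeps calling the hook after each
 * chunk has been written out, until the hook clears itself. */
struct cli {
  void (*cont)(struct cli *);   /* NULL once the reply is complete */
  int lines_sent;
};

static void long_reply(struct cli *c)
{
  c->lines_sent++;              /* pretend we printed one chunk here */
  c->cont = (c->lines_sent < 3) ? long_reply : NULL;
}

/* The CLI core: run the execution routine, then drive the hook.
 * (In a real daemon the hook would be invoked from the write-ready
 * event, not a tight loop.) */
static void cli_drive(struct cli *c)
{
  long_reply(c);                /* execution routine prints the first part */
  while (c->cont)
    c->cont(c);
}
```

The benefit of this shape is that a huge reply never has to be buffered in full: the command yields between chunks and resumes exactly where it left off.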
nest/proto.sgml
@@ -8,14 +8,14 @@
 
 <sect1>Introduction
 
-<p>The routing protocols are the BIRD's heart and a fine amount of code
+<p>The routing protocols are the bird's heart and a fine amount of code
 is dedicated to their management and for providing support functions to them.
 (-: Actually, this is the reason why the directory with sources of the core
 code is called <tt/nest/ :-).
 
 <p>When talking about protocols, one need to distinguish between <em/protocols/
 and protocol <em/instances/. A protocol exists exactly once, not depending on whether
-it's configured on not and it can have an arbitrary number of instances corresponding
+it's configured or not and it can have an arbitrary number of instances corresponding
 to its "incarnations" requested by the configuration file. Each instance is completely
 autonomous, has its own configuration, its own status, its own set of routes and its
 own set of interfaces it works on.
@@ -49,7 +49,7 @@
 state machine and a core state machine.
 
 <p>The <em/protocol state machine/ corresponds to internal state of the protocol
-and the protocol can alter its state whenever it wants to. There exist
+and the protocol can alter its state whenever it wants to. There are
 the following states:
 
 <descrip>
@@ -73,7 +73,7 @@
 The states are traversed according to changes of the protocol state machine, but
 sometimes the transitions are delayed if the core needs to finish some actions
 (for example sending of new routes to the protocol) before proceeding to the
-new state. There exist the following core states:
+new state. There are the following core states:
 
 <descrip>
 	<tag/FS_HUNGRY/ The protocol is down, it doesn't have any routes and
nest/rt-table.c
@@ -13,12 +13,12 @@
  * hold all the information about known networks, the associated routes and
  * their attributes.
  *
- * There exist multiple routing tables (a primary one together with any
+ * There are multiple routing tables (a primary one together with any
  * number of secondary ones if requested by the configuration). Each table
  * is basically a FIB containing entries describing the individual
  * destination networks. For each network (represented by structure &net),
- * there is a one-way linked list of network entries (&rte), the first entry
- * on the list being the best possible one (i.e., the one we currently use
+ * there is a one-way linked list of route entries (&rte), the first entry
+ * on the list being the best one (i.e., the one we currently use
  * for routing), the order of the other ones is undetermined.
  *
  * The &rte contains information specific to the route (preference, protocol
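The per-network route list described in this hunk (one-way linked, best entry first, order of the rest undetermined) can be sketched like this; the field names are simplified stand-ins for the real &net and &rte:

```c
#include <assert.h>
#include <stddef.h>

/* One route entry; real &rte carries far more (attributes, protocol...). */
struct rte {
  struct rte *next;
  int pref;                     /* route preference, higher wins here */
};

/* One destination network: head of its route list is the best route. */
struct net {
  struct rte *routes;
};

/* Insert keeping the best route at the head. Since the order of the
 * non-best routes is undetermined, they can simply go right after it. */
static void rte_insert(struct net *n, struct rte *e)
{
  if (!n->routes || e->pref > n->routes->pref)
    {
      e->next = n->routes;
      n->routes = e;
    }
  else
    {
      e->next = n->routes->next;
      n->routes->next = e;
    }
}
```

Keeping only the head ordered makes the common lookup ("which route do we use for this network?") an O(1) pointer read, while insertions stay cheap.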
proto/ospf/ospf.c
@@ -9,60 +9,63 @@
 /**
  * DOC: Open Shortest Path First (OSPF)
  * 
- * As OSPF protocol is quite complicated and complex implemenation is
- * split into many files. In |ospf.c| you can find mostly interfaces
- * for communication with nest. (E.g. reconfiguration hooks, shutdown
- * and inicialisation and so on.) In |packet.c| you can find various
- * functions for sending and receiving generic OSPF packet. There are
- * also routins for autentications, checksumming. |Iface.c| contains
- * interface state machine, allocation and deallocation of OSPF's
- * interface data structures. |Neighbor.c| includes neighbor state
- * machine and function for election of Designed Router and Backup
- * Designed router. In |hello.c| there are routines for sending
- * and receiving hello packets as well as functions for maintaining
- * wait times and inactivity timer. |Lsreq.c|, |lsack.c|, |dbdes.c|
- * contains functions for sending and receiving link-state request,
- * link-state acknoledge and database description respectively.
- * In |lsupd.c| there are function for sending and receiving
- * link-state updates and also flooding algoritmus. |Topology.c| is
- * a place where routins for searching LSAs in link-state database,
- * adding and deleting them, there are also functions for originating
- * various types of LSA. (router lsa, net lsa, external lsa) |Rt.c|
- * contains routins for calculating of routing table. |Lsalib.c| is a set
- * of various functions for work with LSAs. (Endianity transformations,
- * checksum calculation etc.)
+ * The OSPF protocol is quite complicated and its complex implementation is
+ * split into many files. In |ospf.c|, you can find mostly the interface
+ * for communication with the core (e.g., reconfiguration hooks, shutdown
+ * and initialisation and so on). In |packet.c|, you can find various
+ * functions for sending and receiving of generic OSPF packets. There are
+ * also routines for authentication and checksumming. File |iface.c| contains
+ * the interface state machine, allocation and deallocation of OSPF's
+ * interface data structures. Source |neighbor.c| includes the neighbor state
+ * machine and functions for election of the Designated Router and the Backup
+ * Designated Router. In |hello.c|, there are routines for sending
+ * and receiving of hello packets as well as functions for maintaining
+ * wait times and the inactivity timer. Files |lsreq.c|, |lsack.c|, |dbdes.c|
+ * contain functions for sending and receiving of link-state requests,
+ * link-state acknowledges and database descriptions respectively.
+ * In |lsupd.c|, there are functions for sending and receiving
+ * of link-state updates and also the flooding algorithm. Source |topology.c| is
+ * a place where routines for searching LSA's in the link-state database,
+ * adding and deleting them reside; there are also functions for originating
+ * of various types of LSA's (router LSA, net LSA, external LSA). File |rt.c|
+ * contains routines for calculating the routing table. |lsalib.c| is a set
+ * of various functions for working with the LSA's (endianity conversions,
+ * calculation of checksum etc.).
  *
- * Just one instance of protocol is able to hold LSA databases for
- * multiple OSPF areas and exhange routing information between
- * multiple neighbors and calculate routing tables. The core
- * structure is &proto_ospf, to which multiple &ospf_area and
- * &ospf_iface are connected. To &ospf_area is connected
- * &top_hash_graph, which is a dynamic hashing structure that
- * describes link-state database. It allows fast search, adding
- * and deleting. LSA is kept in two pieces: header and body. Both of them are
- * kept in endianity of CPU.
+ * One instance of the protocol is able to hold LSA databases for
+ * multiple OSPF areas, to exchange routing information between
+ * multiple neighbors and to calculate the routing tables. The core
+ * structure is &proto_ospf to which multiple &ospf_area and
+ * &ospf_iface structures are connected. To &ospf_area is also connected
+ * &top_hash_graph which is a dynamic hashing structure that
+ * describes the link-state database. It allows fast search, addition
+ * and deletion. Each LSA is kept in two pieces: header and body. Both of them are
+ * kept in the endianity of the CPU.
  * 
- * Every area has it's own area_disp() that is
- * responsible for late originating of router LSA, calcutating
- * of routing table and it also ages and flushes LSA database. This
+ * Every area has its own area_disp() which is
+ * responsible for late originating of router LSA, calculation
+ * of the routing table and it also ages and flushes the LSA's. This
  * function is called in regular intervals.
- * To every &ospf_iface is connected one or more
- * &ospf_neighbors. This structure contains many timers and queues
- * for building adjacency and exchange routing messages.
+ * To every &ospf_iface, we connect one or more
+ * &ospf_neighbor's -- a structure containing many timers and queues
+ * for building adjacency and for exchange of routing messages.
  *
- * BIRD's OSPF implementation respects RFC2328 in every detail but
- * some of inner function differs. RFC recommends to make a snapshot
- * of link-state database when new adjacency is building and send
- * database description packets based on information of this 
- * snapshot. The database can be quite large in some networks so
- * I rather walk through &slist structure which allows me to
- * continue even if actual LSA I worked on is deleted. New
- * LSA are added to the tail of this slist.
+ * BIRD's OSPF implementation respects RFC2328 in every detail, but
+ * some of the internal algorithms do differ. The RFC recommends to make a snapshot
+ * of the link-state database when a new adjacency is forming and send
+ * the database description packets based on information of this
+ * snapshot. The database can be quite large in some networks, so
+ * we rather walk through a &slist structure which allows us to
+ * continue even if the actual LSA we were working with is deleted. New
+ * LSA's are added at the tail of this &slist.
  *
- * I also don't build another, new routing table besides the old
- * one because nest helps me. I simply flush all calculated and
- * deleted routes into nest's routing table. It's simplyfies
- * this process and spares memory.
+ * We also don't keep a separate OSPF routing table, because the core
+ * helps us by being able to recognize when a route is updated
+ * to an identical one and it suppresses the update automatically.
+ * Due to this, we can flush all the routes we've recalculated and
+ * also those we've deleted to the core's routing table and the
+ * core will take care of the rest. This simplifies the process
+ * and conserves memory.
  */
 
 #include "ospf.h"
proto/static/static.c
@@ -9,19 +9,18 @@
 /**
  * DOC: Static
  *
- * The Static protocol is implemented in a very straightforward way. It keeps
- * a two lists of static routes: one containing interface routes and one
+ * The Static protocol is implemented in a straightforward way. It keeps
+ * two lists of static routes: one containing interface routes and one
  * holding the remaining ones. Interface routes are inserted and removed according
- * to interface events received from the core via the if_notify() hook, routes
+ * to interface events received from the core via the if_notify() hook. Routes
  * pointing to a neighboring router use a sticky node in the neighbor cache
- * to be notified about gaining or losing the neighbor and finally special
+ * to be notified about gaining or losing the neighbor. Special
  * routes like black holes or rejects are inserted all the time.
  *
  * The only other thing worth mentioning is that when asked for reconfiguration,
  * Static not only compares the two configurations, but it also calculates
- * difference between the lists of static routes mentioned in the old config
- * and the lists in the new one and it just inserts the newly added routes
- * and removes the obsolete ones.
+ * the difference between the lists of static routes and it just inserts the
+ * newly added routes and removes the obsolete ones.
  */
 
 #undef LOCAL_DEBUG
sysdep/unix/krt.c
@@ -20,7 +20,7 @@
  * a local routing table copy.
  *
  * The kernel syncer can work in three different modes (according to system config header):
- * Either with a single routing table and single KRT protocol [traditional Unix]
+ * Either with a single routing table and single KRT protocol [traditional UNIX]
  * or with many routing tables and separate KRT protocols for all of them
  * or with many routing tables, but every scan including all tables, so we start
  * separate KRT protocols which cooperate with each other  [Linux 2.2].
@@ -33,7 +33,8 @@
  *
  * When starting up, we cheat by looking if there is another
  * KRT instance to be initialized later and performing table scan
- * only once for all the instances.  */
+ * only once for all the instances.
+ */
 
 /*
  *  If you are brave enough, continue now.  You cannot say you haven't been warned.
sysdep/unix/log.c
@@ -10,7 +10,9 @@
  * DOC: Logging
  *
  * The Logging module offers a simple set of functions for writing
- * messages to system logs and to the debug output.
+ * messages to system logs and to the debug output. Message classes
+ * used by this module are described in |birdlib.h| and also in the
+ * user's manual.
  */
 
 #include <stdio.h>
