We divided the problem into three subtasks. The first task is to develop a load plan for a train based on the expected mix of containers. An ideal plan specifies positions on the train for various container types in a way that minimises the number of wagons required while meeting a number of packing, safety and aerodynamic constraints. The second task is truck dispatching - when a truck arrives at the terminal, where should it be sent? This problem is not straightforward because trucks arrive randomly and loading and unloading occur simultaneously, and so the ideal position for a container may not yet be available. The third task is to dispatch the stackers that transfer containers between trains and trucks in such a way that truck waiting time is minimised.

The train planning problem can be formulated as an integer programming problem, but it can also be solved using the OASIS software already owned by National Rail Corporation. Train plans can be made more flexible by using container classes rather than specific container details.
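The packing side of the planning problem can be illustrated with a toy heuristic. The sketch below uses a first-fit-decreasing rule with hypothetical container lengths and wagon capacity; it ignores the safety and aerodynamic constraints that OASIS or a full integer program would handle.

```python
# Toy first-fit-decreasing heuristic for assigning containers to wagons,
# illustrating the packing side of train load planning. Container lengths
# and the wagon capacity are hypothetical; the real problem adds safety
# and aerodynamic constraints.

def plan_wagons(container_lengths, wagon_capacity):
    """Return a list of wagons, each a list of container lengths."""
    wagons = []  # each entry tracks remaining capacity and assigned loads
    for length in sorted(container_lengths, reverse=True):
        for wagon in wagons:
            if wagon["free"] >= length:
                wagon["free"] -= length
                wagon["loads"].append(length)
                break
        else:
            # no existing wagon fits this container: open a new one
            wagons.append({"free": wagon_capacity - length, "loads": [length]})
    return [w["loads"] for w in wagons]

# Example: 20 ft and 40 ft containers on 60 ft wagons.
plan = plan_wagons([40, 40, 20, 20, 20, 40], wagon_capacity=60)
```

A full formulation would replace this greedy rule with an integer program whose variables assign container classes to wagon positions.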

The study group suggested several truck dispatch schemes, but did not have sufficient data to evaluate the schemes.

Due to repeated unloading of the weight on the bearing during oscillations, the bearing collar may slowly slip against the axle box wall. Although our calculations show that abrasive wear due to this slippage is negligible, the calculation raises general principles that apply to other possible wear mechanisms. If lifetime is proportional to hardness, we can estimate the relative lifetimes of refurbished and new boxes. Although the resleeve material is softer than the original, the cost-to-lifetime ratio would favour refurbishment under this assumption.
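The lifetime argument reduces to simple arithmetic. In the sketch below the hardness values and costs are hypothetical placeholders, used only to show how the cost-to-lifetime comparison works under the lifetime-proportional-to-hardness assumption.

```python
# Sketch of the lifetime-proportional-to-hardness comparison between
# refurbished and new axle boxes. All numbers are hypothetical
# placeholders, not measured values from the study.

def relative_lifetime(hardness_resleeve, hardness_original):
    # Assumption from the text: wear lifetime scales linearly with hardness.
    return hardness_resleeve / hardness_original

def cost_to_lifetime(cost, lifetime):
    return cost / lifetime

# Hypothetical figures: the resleeve material is softer, but refurbishment
# is much cheaper than a new box (costs in normalised units).
life_ratio = relative_lifetime(hardness_resleeve=150, hardness_original=200)
refurb = cost_to_lifetime(cost=1.0, lifetime=life_ratio)
new_box = cost_to_lifetime(cost=3.0, lifetime=1.0)
```

With these placeholder figures the refurbished box loses a quarter of its lifetime but costs a third as much, so its cost-to-lifetime ratio is lower.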

Important unanswered questions are identified and a specific integrated program of field, laboratory, and theoretical study is suggested.

Existing design methods use purely conductive models of heat transport. We investigate the relevance of convection in the cooling feeder, and set up a boundary-layer model of flow driven by density differences. We find that convection is a significant factor in the design of a feeder, effectively maintaining constant temperature across it. The height of the feeder is important mainly in providing the driving force for this flow.
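A quick way to see why convection dominates is a Rayleigh-number estimate. The property values below are hypothetical placeholders, not data from the study; the cubic dependence on height shows how the feeder height provides the driving force for the flow.

```python
# Back-of-envelope check of whether buoyancy-driven convection matters in
# the cooling feeder: compute the Rayleigh number
#     Ra = g * beta * dT * H^3 / (nu * kappa)
# and compare it with the usual O(10^3) critical value for the onset of
# convection. All property values are hypothetical placeholders.

def rayleigh(g, beta, delta_T, height, nu, kappa):
    return g * beta * delta_T * height ** 3 / (nu * kappa)

# Hypothetical molten-metal properties and a 0.2 m feeder:
# beta: thermal expansion [1/K], nu: kinematic viscosity [m^2/s],
# kappa: thermal diffusivity [m^2/s]
Ra = rayleigh(g=9.81, beta=1e-4, delta_T=50.0, height=0.2,
              nu=5e-7, kappa=1e-5)
convection_matters = Ra > 1e3
```

Because Ra grows like the cube of the feeder height, even modest feeders sit far above the convective threshold, consistent with the finding that convection keeps the feeder nearly isothermal.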

Two problems on flows in low-permeability reservoirs were posed: one concerns radial axisymmetric flow with a threshold pressure gradient, the other radial flow in a slightly compressible medium. The main objective of the exercise was to obtain exact or approximate solutions. We summarize the discussion of the second problem, flow in a slightly compressible medium.
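For the slightly compressible problem, the classical line-source (Theis) solution gives the flavour of the exact solutions sought: the pressure drawdown is proportional to the exponential integral E1(u) with u = r^2/(4*eta*t), where eta is the hydraulic diffusivity. The sketch below evaluates E1 by its power series (valid for small to moderate u); the diffusivity and radii are hypothetical.

```python
# Line-source (Theis) solution sketch for radial flow in a slightly
# compressible medium: drawdown ~ E1(r^2 / (4*eta*t)). Parameter values
# are hypothetical; the series below is only suitable for moderate u.

import math

def E1(u, terms=30):
    """Exponential integral E1(u) by its power series (small/moderate u)."""
    s = -0.5772156649015329 - math.log(u)  # Euler-Mascheroni constant
    sign = 1.0
    for k in range(1, terms + 1):
        s += sign * u ** k / (k * math.factorial(k))
        sign = -sign
    return s

def drawdown(r, t, eta, scale=1.0):
    """Pressure drop at radius r, time t; scale lumps rate/permeability."""
    return scale * E1(r * r / (4 * eta * t))

# Drawdown decays with distance from the well at fixed time.
near = drawdown(r=1.0, t=1.0, eta=1.0)
far = drawdown(r=2.0, t=1.0, eta=1.0)
```

The same E1 structure underlies approximate solutions once a threshold pressure gradient is added, which is what makes the compressible case the natural starting point.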

We quantify the quality of service delivered to the clients of this facility based on a service level agreement between the two parties: the web hosting provider and the client. We assume that the client has the knowledge and resources to quantify its needs. Based on these quantifications, which in our model become parameters, the provider can establish a service offer. In our model, this offer covers the quality of service and the pricing options for it.

The essence of the problem therefore is: given a set of traces that are expected to be representative of common use, we must rearrange the files on the disk so that the performance is optimized.

Programs called disk defragmenters use these simple principles to rearrange data records on a disk so that each file is contiguous, with few or no holes between data records. Some more sophisticated disk defragmenters also try to place related files near each other, usually based on simple static structure rather than a dynamic analysis of accesses. We are interested in more dynamic defragmentation procedures.

We first consider a 1D model of the disk. We then look at the results from an investigation of the 2D disk model followed by a discussion of caching strategies. Finally we list some of the complications that may need to be addressed in order to make the models more realistic.
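The 1D placement idea can be sketched with a trace-driven "organ-pipe" layout: the most frequently accessed file goes near the middle of the disk and cooler files alternate outwards, reducing expected head travel. The trace and file names below are hypothetical, and a real procedure would also weigh co-access patterns.

```python
# Minimal sketch of trace-driven placement on a 1D disk model: count how
# often each file appears in the access trace, then lay files out in an
# organ-pipe arrangement (hottest file in the middle, cooler files
# alternating outwards). Trace contents are hypothetical.

from collections import Counter

def organ_pipe_layout(trace):
    # Rank files by access frequency, most frequent first.
    ranked = [f for f, _ in Counter(trace).most_common()]
    layout = []
    for i, f in enumerate(ranked):
        if i % 2 == 0:
            layout.append(f)      # place to the right of centre
        else:
            layout.insert(0, f)   # place to the left of centre
    return layout

trace = ["a", "b", "a", "c", "a", "b", "d"]
layout = organ_pipe_layout(trace)
```

Here "a" is the hottest file and ends up near the centre of the layout, so a head that idles there travels least on average.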

Many aspects of this process have been investigated to gain greater insight into the physical processes involved. We begin with the heat problem, first as a one-dimensional model, then extending to a second dimension. This analysis indicates that the temperature of the gas surrounding the crystal has a major impact on both the thermal stress experienced by the crystal and the shape of the crystal/melt interface. In contrast, variations in the heat flux from the melt have much less of an effect.

Having investigated the temperature profiles, we then focus on the behaviour of the fluid. Scaling arguments are used to estimate the thickness of the various boundary layers and to explain the main flow patterns that are observed experimentally.

Next, the shape of the meniscus is determined for various rotation rates. This analysis shows that the shape of the meniscus is relatively invariant, at least at low rotation rates, yet the vertical position of the meniscus changes readily with the rate of rotation.

After analyzing the fluid flow patterns, we develop a model for the height of the melt as a function of time. This indicates that for a crystal of constant radius the proportion of the effective pull rate due to the falling fluid level remains essentially constant over the complete growing time of the crystal. This no longer holds if the radius of the crystal is allowed to increase at a constant rate.
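The constant-proportion observation follows from a simple mass balance, sketched below; the crucible cross-section and the (loosely silicon-like) density values are hypothetical placeholders.

```python
# Mass-balance sketch behind the pull-rate observation: a crystal of
# radius R pulled at speed v removes melt at rate rho_c * pi * R^2 * v,
# so the melt level in a crucible of cross-section A falls at
#     dh/dt = (rho_c / rho_m) * pi * R^2 / A * v.
# The falling-level share of the effective pull rate v + |dh/dt| is then
# ratio / (1 + ratio), independent of v -- constant for constant R.
# Densities and geometry are hypothetical placeholders.

import math

def falling_level_fraction(R, A, rho_c=2.33e3, rho_m=2.53e3):
    ratio = (rho_c / rho_m) * math.pi * R ** 2 / A
    return ratio / (1 + ratio)

f_small = falling_level_fraction(R=0.05, A=0.2)  # thin crystal
f_large = falling_level_fraction(R=0.10, A=0.2)  # thicker crystal
```

Because the fraction depends on R but not on the pull speed, it stays constant for a constant-radius crystal and drifts upward when the radius is allowed to grow.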

T. D. Parsons, who was then at Pennsylvania State University, was approached in 1977 by some local spelunkers who asked his aid in optimizing a search for someone lost in a cave in Pennsylvania. Parsons quickly formulated the problem as a search problem in a graph. Subsequent papers led to two divergent problems. One problem dealt with searching under assumptions of fairly extensive information, while the other problem dealt with searching under assumptions of essentially zero information. These two topics are developed in the next two sections.

The proposed solution consists of three main steps: (1) Segmentation of Data, (2) Curve Fitting, and (3) a Decision Process. Segmentation of Data attempts to identify intervals in the data where a single trend is dominant. A curve from an appropriate family of functions is then fitted to each such interval. The Decision Process gauges the quality of the trends identified and either formulates a final answer or, if the program cannot come to a reliable answer, 'flags' the well to be looked at by an operator.
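The three steps can be sketched as follows, assuming hypothetical fixed breakpoints in place of a real segmentation algorithm and straight lines as the family of fitted curves.

```python
# Sketch of the three steps in order: segment the series at hypothetical
# fixed breakpoints, fit a straight line to each segment by least
# squares, and flag the well when any fit is poor. A real implementation
# would locate the breakpoints from the data itself.

def fit_line(xs, ys):
    """Least-squares line fit; returns slope, intercept, residual sum."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return slope, intercept, resid

def assess(ys, breakpoints, tolerance=1.0):
    """Return 'ok' or 'flag for operator' from per-segment residuals."""
    xs = list(range(len(ys)))
    segments = zip([0] + breakpoints, breakpoints + [len(ys)])
    worst = max(fit_line(xs[a:b], ys[a:b])[2] for a, b in segments)
    return "ok" if worst <= tolerance else "flag for operator"

# Two clean linear trends -> accepted without operator intervention.
data = [0, 1, 2, 3, 4, 3, 2, 1]
verdict = assess(data, breakpoints=[5])
```

The residual tolerance plays the role of the decision step's reliability threshold: data that no segment-wise trend explains well is handed to an operator.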

The essential goal of the credit institution is to minimize its losses due to default. By default we mean any event that causes an asset to stop producing income. This can be the closure of a stock as well as the inability of an obligor to pay their debt, or even an obligor's decision to pay off all their debt.

Minimizing the combined losses of a credit portfolio is not a deterministic problem with a single clean solution. The many factors influencing each obligor (different market sectors, their interactions and trends, and so on) are more commonly dealt with in terms of statistical measures. These include the expected return and the volatility of each asset over a given time horizon.

In this spirit, we consider in the following the expected loss and risk associated with the assets in a credit portfolio over a given time horizon of (typically) 10 to 30 years. First, we use a Monte Carlo approach to simulate the loss of the portfolio in multiple scenarios, which leads to a distribution function for the portfolio loss over that time horizon. Second, we compare the results of the simulation to a Gaussian approximation obtained via the Lindeberg-Feller Theorem. Consistent with our expectations, the Gaussian approximation compares well with the Monte Carlo simulation in the case of a portfolio of very risky assets.
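The simulation step can be sketched in a few lines. The default probabilities, losses, and independence assumption below are hypothetical simplifications (real models correlate obligors), and the Gaussian comparison uses the usual mean-variance normal approximation.

```python
# Monte Carlo sketch of the portfolio-loss distribution: each asset
# defaults independently with its own probability and loss given default,
# and the empirical 95% loss quantile is compared with the Gaussian (CLT)
# approximation. All figures are hypothetical; real models add
# correlations between obligors.

import random
import statistics

random.seed(0)

# (default probability, loss given default) per asset -- hypothetical
assets = [(0.02, 100.0)] * 200 + [(0.10, 50.0)] * 100

def simulate_loss():
    return sum(loss for p, loss in assets if random.random() < p)

losses = [simulate_loss() for _ in range(5000)]

# Gaussian approximation from per-asset means and variances
mean = sum(p * loss for p, loss in assets)
var = sum(p * (1 - p) * loss * loss for p, loss in assets)
sd = var ** 0.5

# 95% loss quantile: empirical vs Gaussian (z_0.95 ~ 1.645)
losses.sort()
mc_q95 = losses[int(0.95 * len(losses))]
gauss_q95 = mean + 1.645 * sd
```

With many comparable assets the two quantiles agree closely, which is the Lindeberg-Feller behaviour the text describes; for small or very heterogeneous portfolios the Gaussian tail would understate the risk.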

Using a model which produces a distribution of losses allows credit institutions to estimate their maximum expected loss at a given confidence level. This in turn helps in taking important decisions about whether to grant credit to an obligor, to exercise options, or otherwise to take advantage of sophisticated securities to minimize losses. Ultimately, this leads to the process of credit risk management.