The challenge was to develop a model for asset acquisition planning. To tackle this challenge, it was necessary to forecast demand. For that purpose, time series techniques were used, in particular moving averages and exponential smoothing. The results show that seasonality does not explain all the variation in demand, so a model that considers other possible explanatory variables is needed.
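The two forecasting techniques mentioned above can be sketched in a few lines. This is a minimal illustration, not the model actually fitted in the study; the demand series and the smoothing weight below are hypothetical.

```python
# Minimal sketches of trailing moving-average and simple exponential
# smoothing forecasts. The demand data and alpha are illustrative only.

def exponential_smoothing(series, alpha):
    """Return the smoothed series; its last value is the one-step forecast."""
    smoothed = [series[0]]  # initialise with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

def moving_average(series, window):
    """Trailing moving-average forecast for the next period."""
    return sum(series[-window:]) / window

demand = [120, 135, 128, 150, 160, 152, 170]  # hypothetical monthly demand
forecast_ses = exponential_smoothing(demand, alpha=0.3)[-1]
forecast_ma = moving_average(demand, window=3)
```

Neither technique captures explanatory variables such as temperature or price, which is exactly the limitation the abstract points out.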

According to several authors, gas consumption may be influenced by several factors, such as atmospheric temperatures, heliophany (a measure of daily luminosity), wind, relative humidity, rainfall, minimum and maximum temperatures, demand in previous periods, and prices.

The forecasting of bottled propane gas sales and the return rate was also addressed through multivariate linear regression. Regression models for the monthly number of bottles of types A and B were obtained, presenting good percentages of explained variability with the variables under study.
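The regression step above amounts to ordinary least squares with several predictors. A pure-Python sketch under stated assumptions follows; the predictor columns (temperature, price) and the sales figures are hypothetical, not PRIO's data.

```python
# Multivariate linear regression (OLS) via the normal equations,
# solved with a small Gaussian-elimination helper. Data are hypothetical.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Fit y ~ X beta, where X already includes an intercept column."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b2] for i in range(n)) for b2 in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# hypothetical rows: [1, mean temperature, price] -> bottles of type A sold
X = [[1, 5, 10], [1, 10, 10], [1, 15, 9], [1, 20, 9], [1, 25, 8]]
y = [200, 180, 165, 150, 140]
beta = ols(X, y)
```

In practice one would also report the explained variability (R squared), which is what the abstract uses to judge the fitted models.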

The main goal of the challenge, the acquisition plan, was addressed using inventory models with reverse logistics. Several deterministic approaches were considered to capture different aspects of the framework. A new inventory model was developed to account for the three possible destinations of returned bottles: cleaning, requalification, or disposal. The models were implemented in Excel and can be tested using PRIO estimates of holding costs and fixed setup costs, and the forecasts of sales and return rate computed previously.
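The deterministic logic above can be illustrated with a classic economic-order-quantity calculation plus a three-way split of returns. This is a hedged sketch: the EOQ formula stands in for the report's richer reverse-logistics model, and every cost and rate below is hypothetical, not a PRIO estimate.

```python
# Toy lot-sizing with returns. The EOQ formula Q* = sqrt(2 K D / h) gives the
# order quantity; returns split into cleaning / requalification / disposal,
# with only the first two feeding usable bottles back into stock.
import math

def eoq(demand_rate, setup_cost, holding_cost):
    """Economic order quantity: Q* = sqrt(2 K D / h)."""
    return math.sqrt(2 * setup_cost * demand_rate / holding_cost)

def net_new_bottles(demand, return_rate, frac_clean, frac_requalify):
    """Bottles that must be newly acquired once usable returns are recycled.
    Returns routed to disposal contribute nothing back to stock."""
    usable = demand * return_rate * (frac_clean + frac_requalify)
    return demand - usable

Q = eoq(demand_rate=1200, setup_cost=50.0, holding_cost=2.0)
acquisitions = net_new_bottles(demand=1200, return_rate=0.8,
                               frac_clean=0.6, frac_requalify=0.3)
```

The same arithmetic is straightforward to reproduce in the Excel implementation the abstract mentions.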

The challenge proposed by EDP consists of simulating electricity prices, not only for risk measurement purposes but also for scenario analysis in terms of pricing and strategy. Data on hourly electricity prices from 2008 to 2016 were provided.

Numerous methods to deal with Electricity Price Forecasting (EPF) have been proposed and can be classified as: (i) multi-agent models, (ii) fundamental models, (iii) reduced-form models, (iv) statistical models, and (v) computational intelligence models. A recent and exhaustive review is presented in [13].

During this study group, several promising statistical techniques were proposed by the contributors: ARIMA, sARIMA, longitudinal models, generalized linear models, and vector autoregressive models. In this report a GLM and a vector autoregressive model are presented and their predictive power is discussed.

In the GLM framework two different transformations were considered, and for both, the season of the year, the month, or the winter/summer period proved to be significant explanatory variables in the different estimated models.

On the other hand, the multivariate approach using VAR, with the meteorological season and the type of day as exogenous variables, yields a multivariate model that explains the intra-day and intra-hour dynamics of the hourly prices. Although the forecasts do not exactly replicate the real prices, they are quite similar.

In both of the approaches reported here, more extensive work would certainly improve the proposed models.

In conclusion, EPF is a growing area encompassing many different approaches. In fact, other approaches, such as multi-agent models, fundamental models, reduced-form models and computational intelligence models, also offer considerable scope for EPF.

An investigation into factors that may be correlated with the uncertainty led to the observation that there are structural biases in the model. It is possible to remove these, and thereby reduce the mean square error of the predictions, but the benefit of this is apparent in the prediction of ‘normal’ conditions, rather than in flood predictions.

Additionally, a tweak to the linear fit in the quantile regression is suggested that is better suited to the data.

Splashing and the subsequent re-entrainment of micro-droplets into the atmosphere was identified as one possible mechanism through which the area affected by a contamination could be significantly increased. The study group looked at experimentally determined splashing thresholds for droplet impacts with impermeable substrates, to obtain initial predictions of whether or not a given droplet will splash. In cases where splashing occurs, the droplet inertia is the most significant effect driving the initial phase of the liquid infiltration into a porous medium, and the study group developed a model to investigate this behaviour.

For longer time scales the study group determined that capillary suction played the most significant role in spreading the liquid within the porous medium. Models for the evolution of the partial saturation within a porous medium based on Richards’ equation were investigated. Over even longer time scales evaporation converts the liquid back into a potentially hazardous vapour. The study group started to incorporate evaporation into models of liquid infiltration in a porous medium in order to describe this phenomenon. Recommendations for future theoretical, numerical and experimental modelling are also provided.

certain value. Due to drilling operations, the mechanical stresses can exceed the load-bearing capacity of the rock. As the local stresses exceed a certain level, a certain amount of rock is fractured into sand. The sand is then carried by the fluid through the wellbore, depending on the flow rate. The amount of solids can be less than a few grams per cubic metre of reservoir fluid, or a substantial amount. In the latter case, erosion of the rock and the removal of sufficient quantities of rock can occur. This can produce subsurface cavities which collapse and destroy the well.

When sanding is unavoidable, it is necessary to estimate the characteristics of the process. Our aim was to develop a simple one-dimensional local model which predicts the volume of sanding and the radius and porosity of the yielded zone. Such a model will help the company in the development of complex 3D models.

reservoirs is studied. Some remarks regarding sensitivity with respect to the time horizon, terminal cost and forecast of inflow are made.

In this paper we focus on the allocation process to determine the settings for each thruster that result in the minimal total power and thus fuel consumption. The mathematical formulation of this situation leads to a nonlinear optimization problem with equality and inequality constraints, which can be solved by applying Lagrange multipliers.

We give three approaches: first of all, the full problem was solved using the MATLAB fmincon routine with the solution from the linearised problem as a starting point. This implementation, with robust handling of the situations where the thrusters are overloaded, led to promising results: an average reduction in fuel consumption of approximately two percent. However, further analysis proved useful. A second approach changes the set of variables and so reduces the number of equations. The third and last approach solves the Lagrange equations with an iterative method on the linearized Lagrange problem.
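The Lagrange-multiplier idea can be shown in closed form for a simplified version of the problem: with a quadratic power proxy (sum of squared thrusts) and linear force/moment balance A x = b, stationarity gives x = Aᵀ(AAᵀ)⁻¹b. This is a sketch only; the thruster geometry below is hypothetical and the real problem is nonlinear with inequality constraints.

```python
# Least-norm thrust allocation: minimise sum(x_i^2) subject to A x = b.
# The Lagrange conditions give x = A^T (A A^T)^{-1} b; with two balance
# equations, the 2x2 matrix A A^T is inverted explicitly.

def least_norm_allocation(A, b):
    """Closed-form Lagrange solution for a 2-row constraint matrix A."""
    g11 = sum(a * a for a in A[0])
    g12 = sum(a * c for a, c in zip(A[0], A[1]))
    g22 = sum(c * c for c in A[1])
    det = g11 * g22 - g12 * g12
    lam = [(g22 * b[0] - g12 * b[1]) / det,
           (-g12 * b[0] + g11 * b[1]) / det]
    return [A[0][j] * lam[0] + A[1][j] * lam[1] for j in range(len(A[0]))]

# two balance equations (net surge force and yaw moment), three thrusters
A = [[1.0, 1.0, 1.0],     # each thruster contributes to surge
     [1.0, 0.0, -1.0]]    # lever arms about the yaw axis
b = [3.0, 1.0]            # required net force and moment
x = least_norm_allocation(A, b)
```

Overload handling, as in the report, would then clip any x_i that exceeds a thruster's rating and re-solve for the remaining thrusters.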

In this paper, we present a linear programming model for maximizing the amount of decentralized power generation while respecting the load limitations of the network.

We describe a prototype showing that, for an example network, the maximization problem can be solved efficiently. We also modeled the case where the power consumption and decentralized power generation are considered as stochastic variables, which is inherently more complex.
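A heavily simplified instance of the maximization can be solved without an LP solver: with a single shared line limit and per-generator caps, the fractional problem reduces to a greedy fill. This toy, with hypothetical network data, only illustrates the structure; a real instance with limits on every line needs a proper LP solver.

```python
# Toy decentralized-generation maximisation: maximise total accepted
# generation subject to one line limit and per-generator caps.
# For this single-constraint fractional problem a greedy fill is optimal.

def max_generation(caps, line_limit):
    """Accept as much generation as possible: 0 <= g_i <= caps[i],
    sum(g_i) <= line_limit."""
    accepted = []
    remaining = line_limit
    for cap in caps:
        g = min(cap, remaining)
        accepted.append(g)
        remaining -= g
    return accepted

plan = max_generation(caps=[4.0, 3.0, 5.0], line_limit=9.0)
```

The stochastic variant mentioned above would replace the fixed caps and limit with random variables, which is why it is inherently harder.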

energy storage system comprising energy stores (batteries) placed at consumer level (in customers’ homes). The aim is to flatten consumer demand and make better use of home-based generation. The Study Group considered the mechanism of connecting batteries to the local distribution system, the ability to meet engineering requirements for the standard of the connection, and the potential impact of large numbers of such connections on the stability of the local distribution network. Network and (DC-AC) inverter models were used to examine network connection transients. A statistical model was proposed to estimate the distribution of key electrical parameters, to determine the likelihood of engineering standards being exceeded. The Study Group also considered stochastic methods of modelling wind speed, to better understand the requirements for battery energy storage as a complement to wind power.

is applied to the RUSAL Aughinish Alumina digester area.

1) 1968-1971

2) 1974-1977

3) 1978-1980

4) 1980-1983

5) 1984-1986

6) 1987-1988

We have obtained an exact mathematical description of a geoseismic signal propagating through an anisotropic medium using a constant coefficient wave equation as the basic model. This model captures exactly the elliptical velocity profile required in the formulation of the geophysical model from which we obtained exact formulas describing the travel-time through a two layer geological structure, and an exact inversion formula for computing the anisotropic velocity parameter (gamma). A robust numerical method based on a minimization technique was presented as an accurate method of computing both travel-time and the inverted gamma.
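The minimization idea behind the travel-time computation can be sketched numerically. By Fermat's principle, the two-layer travel time is minimized over the crossing point of the ray with the interface. For simplicity this toy uses isotropic layer speeds and a golden-section search; the report's formulation replaces the speeds with the elliptical (anisotropic) profile, and all the geometry below is hypothetical.

```python
# Two-layer travel time minimised over the ray's interface crossing point
# (Fermat's principle), using golden-section search. Isotropic toy version.
import math

def travel_time(x, h1, h2, offset, v1, v2):
    """Time along a ray bending at horizontal position x on the interface."""
    return math.hypot(x, h1) / v1 + math.hypot(offset - x, h2) / v2

def minimise_travel_time(h1, h2, offset, v1, v2, tol=1e-9):
    """Golden-section search for the crossing point with minimal travel time."""
    lo, hi = 0.0, offset
    inv_phi = (math.sqrt(5) - 1) / 2
    a = hi - inv_phi * (hi - lo)
    b = lo + inv_phi * (hi - lo)
    while hi - lo > tol:
        if travel_time(a, h1, h2, offset, v1, v2) < travel_time(b, h1, h2, offset, v1, v2):
            hi, b = b, a
            a = hi - inv_phi * (hi - lo)
        else:
            lo, a = a, b
            b = lo + inv_phi * (hi - lo)
    return (lo + hi) / 2

# symmetric toy case: equal depths and speeds, so the ray crosses midway
x_star = minimise_travel_time(h1=1.0, h2=1.0, offset=2.0, v1=1.0, v2=1.0)
```

Such a robust bracketing method avoids the root-finding fragility the abstract contrasts with.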

The exact formulas and robust numerical methods are significant improvements over the approximations and root finding methods discussed in the background material, and we note our formulation is no more difficult than these background methods.

We derived asymptotic formulas valid for the near vertical case, which describe accurately the high sensitivity of gamma to the input parameters in this case. Our numerical work also confirms this sensitivity, even using exact formulas and robust numerical methods.

We conclude that the computation of the anisotropic velocity parameter (gamma) for the given physical measurements from a series of surface signals and single borehole receiver is intrinsically unstable. By changing to the alpha,beta velocity parameter space, we obtain an inversion method that is much less sensitive to input errors. For certain geophysical problems, the alpha,beta parameters may suffice for an accurate description of the material.

When the anisotropic velocity parameter (gamma) is needed directly, a different measurement technique is required. This route will require further investigation, and we have proposed a number of promising possibilities involving a differential time measure.

Two problems on flows in low-permeability reservoirs were posed: one on radial axisymmetric flows with a threshold pressure gradient, and the other on radial flows in a compressible medium. The main objective of the exercise was to obtain exact or approximate solutions. We summarize the discussion of one of the two problems, flows in a slightly compressible medium.

As the unique power supplier in Huizhou (China), Huizhou Electric Power wants solutions to the following problems:

1. Prediction of the total electrical consumption and the peak load of the city in 2006, based on the economic development and the features of the city.

2. Monthly prediction of the consumption and peak load in 2006.

3. Daily prediction of the consumption and peak load from July 10th to 16th, 2006.

4. Prediction of the load every 15 minutes on July 10th.

5. Real-time forecasting, i.e., amending the existing load prediction for the next 15 minutes.
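For the real-time amendment in the last problem, a simple bias-correction scheme is one natural sketch: shift the baseline 15-minute forecast by an exponentially smoothed average of its recent errors. This is an illustration under stated assumptions, not the method actually used; the loads and the smoothing weight are invented.

```python
# Real-time amendment of a 15-minute load forecast: correct the baseline
# by exponentially smoothed recent forecast errors. All numbers hypothetical.

def amended_forecast(baseline, actuals, past_baselines, alpha=0.4):
    """Shift the next baseline forecast by smoothed past forecast errors."""
    bias = 0.0
    for actual, predicted in zip(actuals, past_baselines):
        bias = alpha * (actual - predicted) + (1 - alpha) * bias
    return baseline + bias

next_load = amended_forecast(baseline=510.0,
                             actuals=[500.0, 505.0, 512.0],
                             past_baselines=[495.0, 500.0, 504.0])
```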

The jet blending process was investigated in a number of ways at the Study Group. These included: simple estimates for blending times, theoretical and experimental description of jet behaviour, development of a simple compartment model for the blending process, and several large scale computer simulations of the jet-induced motion using a commercial Computational Fluid Dynamics package. In addition, the sedimentation of contaminant particles in the tanks was investigated. This overall investigation, using a variety of approaches, gave a good knowledge of the blending process.

The dust layer, if not extremely reactive, might cause failure of the cable by overheating since the extra insulation of the dust layer is not allowed for in standard tables, nor is the heat generated by normal decomposition of the dust.

One can thus envisage two extreme types of failure: ignition of the dust before cable failure, and cable failure before the dust ignites, due to its insulating and thermogenic properties.

The primary question raised was whether any effects of this type could occur for a reasonable thickness of dust layer; i.e. would it be km, m or cm for a reasonable cable installation?

It is found that the volume fraction has the strongest impact on the effective permittivity, linear at first but higher order at higher volume fractions. The aspect ratio of the inclusions has a moderate effect, which is exaggerated in the extreme case of needle-like inclusions, and which can also be seen in a stronger nonlinearity. There is also a possibility that some features in the shape of the inclusion boundaries may influence the frequency dependence of the effective permittivity. Inclusion size and sharp edges have negligible effect.

Once established, most of the subsequent fluid flows out through the fractures as a result of their high permeability, thus inhibiting further cracking. This problem is resolved by injecting a diverter fluid containing particles and fibres to plug the fractures temporarily and allow another cycle of fracturing fluid to stimulate a different region of the oil well. Finally the plugs are removed and oil production in the well commences.

In this report we investigate how the size and shape distributions, concentration, and material properties of particles and fibres in the diverter fluid affect the formation of the filtercakes. We consider how these properties may be engineered to maximize clogging in fractures and cavities, as near to the bore hole as possible, whilst minimizing the amount of material wasted.

The problem is approached by considering the two distinct aspects of the behaviour, namely the flow of particulates from the bore into the fracture, and the fracture clogging. A series of mathematical models are employed that elucidate the system behaviour, allowing us to offer guidance on the appropriate choice for the design parameters to optimize clogging.

Schlumberger is interested in determining how to use the rotating disk experiments to extract parameters that govern the reaction rate between the acid and carbonate rock. For mass transfer limited reactions these include (i) the diffusion rate across the boundary layer, and (ii) the thickness of the boundary layer. For a reaction that is surface limited, (i) the reaction rate and, (ii) the reaction order are of paramount interest.

The Study Group began by reanalyzing the solution by Levich, coupled with numerical solutions of the flow, in the hope that it would lead to a deeper understanding of the fluid dynamics in the neighbourhood of the rock: in particular, how the fluid flow changes as the Reynolds number is increased, and how this might indicate the best location to measure the calcium in the reaction vessel.

The modelling looked not only at the coupling of the fluid flow with the diffusion equation for the ions but also a preliminary Stefan problem for the dissolving rock.

The state-of-the-art methods use optimisation to find the seismic properties of the rocks, such that when used as the coefficients of the equations of a model, the measurements are reproduced as closely as possible. This process requires regularisation if one is to avoid instability. The approach can produce a realistic image but does not account for uncertainty arising, in general, from the existence of many different patterns of properties that also reproduce the measurements.

In the Study Group a formulation of the problem was developed, based upon the principles of Bayesian statistics. First the state-of-the-art optimisation method was shown to be a special case of the Bayesian formulation. This result immediately provides insight into the most appropriate regularisation methods. Then a practical implementation of a sequential sampling algorithm, using forms of the Ensemble Kalman Filter, was devised and explored.

The study group based its approach on a video produced by Pacific Northwest of a laboratory experiment in which gas was generated in a yield-stress material. In these experiments oxygen was produced from hydrogen peroxide, and the material was a clay suspension. The video showed bubbles growing in the material, and the height of the sample rising, rather like baking bread. After some time, some bubbles were large enough to overlap, and they merged. The result of several mergers was to form cracks, fairly horizontal, which grew by being inflated by gas and then breaking sideways into a nearby bubble. A model of this crack growth is given in section 3.1. Gas was released to the surface from the network of cracks.

Nuclear Electric currently have complex computational models of how plants will behave under these conditions, which allows them to compute plant data (e.g., reactor temperatures) from given grid frequency data. One approach to damage assessment would require several years' worth of real grid data to be fed into this model and the corresponding damage computed (via "cycle distributions" created by their damage experts). The results of this analysis would demonstrate one of three possibilities: the damage may be acceptable under all reasonable operating conditions; or it may be acceptable except in the case of an exceptional abrupt change in grid frequency (caused by power transmission line failure, or another power station suddenly going off-line, for instance), in which case some kind of backup supply (e.g., gas boilers) would be required; or it may simply be unacceptable.

However, their current model runs in approximately real time, making it inappropriate for such a large amount of data: our problem was to suggest alternative approaches. Specifically, we were asked the following questions:

- Can component damage be reliably estimated directly from cycle distributions of grid frequency? i.e., are there maps from frequency cycle distributions to plant parameter cycle distributions?

- Can a simple model of plant dynamics be used to assess the potential for such maps?

- What methods can be used to select representative samples of grid frequency behaviour?

- What weightings should be applied to the selections?

- Is it possible to construct a "cycle transform" (Fourier transform) which will capture the essential features of grid frequency and which can then be inverted to generate simulated frequency transients?

We did not consider this last question, other than to say "probably not".

We were supplied with data of the actual grid frequency measurements for the evening of 29/7/95, and the corresponding plant responses (obtained using Nuclear Electric's current computational model). A simplified nonlinear mathematical model of the plant was also provided.

Two main approaches were considered: statistical prediction and analytical modelling via a reduction of the simplified plant model.

Critical to the success of the circuit breaker is that it is designed to cause the arc to move away from the contacts, into a widening wedge-shaped region. This lengthens the arc, and then moves it onto a series of separator plates called an arc divider or splitter.

The arc divider raises the voltage required to sustain the arcs across it, above the voltage that is provided across the breaker, so that the circuit is broken and the arcing dies away. This entire process occurs in milliseconds, and is usually associated with a sound like an explosion and a bright flash from the arc. Parts of the contacts and the arc divider may melt and/or vapourise.

The question to be addressed by the Study Group was to mathematically model the arc motion and extinction, with the overall aim of an improved understanding that would help the design of a better circuit breaker.

Further discussion indicated that two key mechanisms are believed to contribute to the movement of the arc away from the contacts, one being self-magnetism (where the magnetic field associated with the arc and surrounding circuitry acts to push it towards the arc divider), and the other being air flow (where expansion of air combined with the design of the chamber enclosing the arc causes gas flow towards the arc divider).

Further discussion also indicated that a key aspect of circuit breaker design was that it is desirable to have as fast a quenching of the arc as possible, that is, the faster the circuit breaker can act to stop current flow, the better. The relative importance of magnetic and air pressure effects on quenching speed is of central interest to circuit design.

(1) Further partitioning of output load and prices from an ESE into off-peak, peak and weekend periods to determine the subsequent effect on earnings.

(2) The diagnosis of simulated load paths. As simulated load was not supplied for all engines, the diagnostics developed in this report did not include an analysis of load.

(3) The building of a response surface to capture the interaction between temperature, load and price.

(4) Examination of the convergence behaviour of an ESE. Convergence in this context means the determination of the minimum number of load and price paths required from a simulator in order to return expected profiles that conform to industry expectations. This would involve the sequential testing of an increasing number of simulated paths from an ESE in order to determine the number required.

In conclusion, it is important to understand that each of the simulators diagnosed in this study was criticised according to industry expectations, and to the degree that the diagnostics employed here reflect those expectations. In fact, all simulators will attract criticism, given that they are calibrated on historical data and are expected to generate future prices for market conditions that are unknown. The mark of an appropriate ESE is that the future load and pricing structure it generates is not too much at variance with industry expectations. A critical function of a simulator is for it not to overestimate or underestimate load and prices such that the risk metrics used to govern earnings risk faced by an electricity retailer are compromised to the extent that their book is either grossly over-hedged or under-hedged.

For the first, we used the method of multiple scales to homogenize this model over the microstructure, formed by the small lithium particles in the electrodes.

For the second, we gave rigorous bounds for the effective electrochemical conductivity for a linearized case.

We expect similar results and bounds for the "full nonlinear problem" because variational results are generally not adversely affected by a sinh term.

Finally we used the asymptotic methods, based on parameters estimated from the literature, to attain a greatly simplified one-dimensional version of the original homogenized model. This simplified model accounts for the fact that diffusion of lithium atoms within individual electrode particles is relatively much faster than that of lithium ions across the whole cell, so that lithium ion diffusion is what limits the performance of the battery. However, since most of the potential drop occurs across the Debye layers surrounding each electrode particle, lithium ion diffusion only significantly affects cell performance if there is more or less complete depletion of lithium ions in some region of the electrolyte, which causes a break in the current flowing across the cell. This causes catastrophic failure. Provided such failure does not occur, the potential drop across the cell is determined by the concentration of lithium atoms in the electrode particles. Within each electrode the lithium atom concentration is, to leading order, a function of time only and not of position within the electrode. The depletion of electrode lithium atom concentration is directly proportional to the current being drawn off the cell. This leads one to expect that the potential of the cell gradually drops as current is drawn off it.

We would like to emphasize that all the homogenization methods employed in this work give a systematic approach for investigating the effect that changes in the microstructure have on the behaviour of the battery. However, due to lack of time, we have not used this method to investigate particular particle geometries.

A key part of the device currently under investigation is an “interference filter”. This consists of a large number (say around 60) of thin layers (the whole device is about 2 μm thick) with differing refractive indices. The combination of multiple reflections and refractions, with associated change of phase, results in interference so that light of certain frequencies is mostly reflected back while other light is mostly transmitted through to the other side of the filter. In the ideal case, all the good frequencies will pass through the filter to the TPV cell lying behind it while the bad ones are reflected away:

- overall reflection coefficient (by power) = 1 for λ < λ_g, 0 for λ > λ_g;

- overall transmission coefficient (by power) = 0 for λ < λ_g, 1 for λ > λ_g.

Unfortunately, because of the effect of direction on optical lengths and hence on the level of interference, the transmission and reflection coefficients depend strongly upon the angle of incidence. For black-body radiation the dependence of the normal component of power upon the angle of incidence α exhibits a maximum at α = π/4, although, of course, the power density (with respect to angle) is positive for all 0 < α < π/2. This means that – assuming that black-body radiation is impinging on the TPV cell – it makes sense to try to optimize the filter’s performance for α = π/4, but then most radiation will be incident along sub-optimal directions. Indeed, because the optical lengths inside the filter are essentially given by cos α (or sec α), the effectiveness of the filter depends least strongly upon angle for α near 0 (normal incidence). For this reason attention must also be focused on trying to direct the light from the radiator. With light travelling in (roughly) one direction, it is then possible to orient the cells so that light falls on them normally and the interference filters should be designed to work optimally with α = 0 (or small).
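The multilayer interference described above is conventionally computed with the characteristic-matrix (transfer-matrix) method. The sketch below does this at normal incidence (α = 0); the 8-pair quarter-wave stack and the refractive indices are illustrative stand-ins, not the actual filter design.

```python
# Characteristic-matrix method for a thin-film stack at normal incidence:
# each layer contributes a 2x2 matrix; the stack's product matrix gives the
# power reflection coefficient. Layer data are hypothetical.
import cmath
import math

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one dielectric layer (normal incidence)."""
    delta = 2 * math.pi * n * d / wavelength  # phase thickness
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def mat_mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def reflectance(layers, wavelength, n_incident=1.0, n_substrate=1.52):
    """Power reflection coefficient of a stack of (index, thickness) layers."""
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        M = mat_mul(M, layer_matrix(n, d, wavelength))
    B = M[0][0] + M[0][1] * n_substrate
    C = M[1][0] + M[1][1] * n_substrate
    r = (n_incident * B - C) / (n_incident * B + C)
    return abs(r) ** 2

# quarter-wave stack: 8 high/low index pairs tuned to wavelength 1.0 (arb. units)
lam0 = 1.0
stack = [(2.3, lam0 / (4 * 2.3)), (1.38, lam0 / (4 * 1.38))] * 8
R_design = reflectance(stack, lam0)
```

Sweeping the wavelength (and, with a small extension, the angle via the cos α factor in the phase thickness) reproduces the strong angular dependence the text discusses.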

A second design improvement is also worthy of mention. It was raised during our deliberations, but constraints on time prevented us from further examining this research topic. The black-body radiation spectrum is not immutable; it is modified by using a multiple dielectric layer design for the emitter. This has the advantage of controlling both the angular emission pattern and the emission spectrum.

The study group concluded that while ‘prediction’ of price in any meaningful sense was not viable, a model for scenario analysis could be realised. The model did not incorporate all of the factors of interest, but did model important time lags in the response of market players’ future behaviour to current oil prices. Consideration of the optimisation of supply through new capacity in the telecoms industry led to a generalisation of the standard Cournot-Nash equilibrium. This indicates how an output-constrained competitive market might operate. It enables identification of different pricing regimes determined by the level of competition and the resource limitations of particular supplier firms. Two models were developed sufficiently to enable simulation of various conditions and events. The first modelled oil price as a mean reverting Brownian motion process. Strategies and scenarios were included in the model and realistic simulations were produced. The second approach used stability analysis of an appropriate time-delayed differential equation. This enabled the identification of unstable conditions and the realisation of price oscillations which depended on the demand scenarios.
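The first model above, mean-reverting Brownian motion for the oil price, can be simulated with a few lines of Euler-Maruyama. This is a sketch of the process class only; the parameter values are illustrative, and the real model also layered strategies and scenarios on top.

```python
# Euler-Maruyama simulation of a mean-reverting (Ornstein-Uhlenbeck) price:
# dP = speed * (mean - P) dt + sigma dW. Parameters are illustrative.
import math
import random

def simulate_ou(p0, mean, speed, sigma, dt, steps, seed=42):
    """Return a simulated price path of length steps + 1."""
    rng = random.Random(seed)
    path = [p0]
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        path.append(path[-1] + speed * (mean - path[-1]) * dt + sigma * dW)
    return path

path = simulate_ou(p0=20.0, mean=25.0, speed=0.5, sigma=2.0, dt=1 / 252, steps=252)
```

With sigma set to zero the path relaxes deterministically to the mean, which is a convenient sanity check on the drift term.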

This report describes some initial models, two of which are developed in more detail: one for the propagation of torsional waves along the drill string and their reflection from contact points with the well bore; and one for the dynamic coupling between the underreamer and the drill bit during drilling.

The tank water has a large thermal capacity and National Grid wishes to investigate whether circulation of the tank water without external heating could provide sufficient energy input to avoid freezing. Only tanks in which the tank water is below ground are investigated in the report. The soil temperature under the reservoir at a depth of 10 m and below is almost constant.

In order to recover this information from NMR spectra the company must have an effective, efficient and robust algorithm to perform inversion from the dataset to the unknown probability distribution on magnetic relaxation times. This ill-posed problem is encountered in diverse areas of magnetic imaging and there does not appear to be an ‘off-the-shelf’ solution which the company can apply to its problem. Company scientists have developed a sophisticated algorithm which performs well on some simple test datasets, but they are interested in knowing if there are simpler approaches which could work effectively, or if some limited but useful properties of the density are accessible with a totally different approach.

Our report is organised as follows. In Section 1.2 we present a careful and complete description of the problem and the work already done by the company. In Section 1.3 we discuss Truncated Singular Value Regularisation and Tikhonov Regularisation and show how some ‘off-the-shelf’ Matlab code may be used to good effect on the test datasets provided by the company. In Section 1.4 we show that one can incorporate higher order regularisation into the company’s existing algorithm, answering one specific question raised at the beginning of the workshop. In Section 1.5 we record our unsuccessful attempt to establish an iterative algorithm for the positively constrained inversion. Finally, in the last section we review our conclusions and make suggestions for future work.
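The Tikhonov idea discussed in Section 1.3 can be sketched with a simple gradient iteration on the regularised least-squares functional. This is a toy stand-in, not the company's algorithm or the Matlab code referred to above: the 2x2 matrix below merely plays the role of the discretised NMR kernel.

```python
# Landweber-style gradient descent on the Tikhonov functional
# ||A x - b||^2 + lam * ||x||^2, with pure-Python list "matrices".

def tikhonov_gd(A, b, lam, step=0.1, iters=2000):
    """Iterate x <- x - step * (A^T (A x - b) + lam * x)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(len(A))]
        resid = [Ax[i] - b[i] for i in range(len(A))]
        grad = [sum(A[i][j] * resid[i] for i in range(len(A))) + lam * x[j]
                for j in range(n)]
        x = [x[j] - step * grad[j] for j in range(n)]
    return x

# toy ill-posed stand-in: the regularised solution shrinks toward zero
x_reg = tikhonov_gd(A=[[2.0, 0.0], [0.0, 1.0]], b=[2.0, 1.0], lam=1.0)
```

A positivity constraint, the subject of Section 1.5, could be bolted on by projecting x onto the nonnegative orthant after each step, though convergence is exactly the delicate point the report records.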

The reason for using the electrical current is that “flushing” the soil using water alone is not effective for removing the contaminants. By heating up the soil and vaporizing the contaminated liquid, it is anticipated that the rate of extraction will increase, as long as recondensation is not significant. A major concern, therefore, is whether recondensation will occur. Intuitively, one might speculate that the liquid phase may dominate near the injection well. Moving away from the injection site towards the extraction well, due to the combined effects of lower pressure and higher temperature (from heating), phase change occurs and a mixture of vapour and liquid may co-exist. There may also be a vapour-only region, depending on the values of temperature, pressure, and other parameters. In the two-phase zone, since vapour bubbles tend to rise due to the buoyancy force, and the temperature decreases along the vertical path of the bubbles out of the heated region, it is possible that the bubbles will recondense before reaching the extraction well. As a consequence, part of the contaminants may remain in the soil. Obviously, to predict the transition between single-phase and two-phase regions and to understand the transport phenomena in detail, a thermal capillary two-phase flow model is needed. However, to simplify the problem, here we only consider the case where the two phases co-exist in the entire region.

Our goal is to produce a single source-single receiver model which uses modern seismic measurements to determine the elastic moduli of the lower media. Once these are known, geoscientists could better describe the angular dependence of the velocity in the layer of interest and would also have some clues as to the actual material composing it.

The problem presented by Husky Energy concerns seismic attenuation: the loss of energy as a seismic wave propagates through the earth. As an exploration tool, attenuation effects have only recently attracted attention. These effects can prove useful in two ways: as a means of correcting seismic data to enhance resolution of standard imaging techniques, and as a direct hydrocarbon indicator. Theoretically, a subsurface reservoir full of hydrocarbons will tend to be acoustically softer than a porous rock filled only with water; Kumar et al. show that attenuation is highest in a partially fluid-saturated rock.

Many physical processes can lead to the attenuation of a seismic trace. In the present work, we ignore attenuation effects such as spherical divergence or scattering, and concentrate on intrinsic attenuation effects exclusively. The latter are caused by friction, particularly in porous rocks between fluid and solid particles.

The goal of the workshop was to find a means of computing seismic attenuation from relatively short windows of seismic imaging data, and particularly be able to identify regions of anomalous attenuation.
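A common way to estimate attenuation from two windowed spectra is the spectral-ratio method: for a constant-Q earth, the log spectral ratio decays linearly with frequency, and Q follows from the slope of a straight-line fit. The sketch below assumes that method; the report's own approach may differ, and all names and values are illustrative.

```python
import numpy as np

def estimate_q(freqs, amp1, amp2, dt):
    """Estimate the quality factor Q from two windowed amplitude spectra.

    For a constant-Q medium, ln(amp2/amp1) = -pi * f * dt / Q + const,
    where dt is the travel time between the two windows, so Q is
    recovered from the slope of a straight-line fit in frequency.
    """
    ratio = np.log(amp2 / amp1)
    slope, _ = np.polyfit(freqs, ratio, 1)
    return -np.pi * dt / slope
```

Anomalously low Q estimated over a short window would then mark a candidate region of high intrinsic attenuation.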

The proposed solution consists of three main steps: (1) Segmentation of Data, (2) Curve fitting, and (3) a Decision Process. Segmentation of Data attempts to identify intervals in the data where a single trend is dominant. A curve from an appropriate family of functions is then fitted to this interval of data. The Decision Process gauges the quality of the trends identified and either formulates a final answer or, if the program cannot come to a reliable answer, ‘flags’ the well to be looked at by an operator.

We begin with an overview of the physics of flow in water distribution networks, and the implementation used in the EPANET flow solver. The bulk of our results from the workshop are concerned with analysing simple flow networks and using the results to draw conclusions about the well-posedness of both the forward and inverse problems.

The model in which the mixing coefficient is taken to be the harmonic average of the mixing coefficients of the two pure fluids is analysed in detail, since this is likely to be a good approximation when the density difference between the two fluids is small.

When the density difference is large, in the laminar flow regime fingering will occur and there will be a relatively sharp interface between the fluids. However, in the turbulent case, as gravity drives the denser fluid into the less dense one the invading fluid is immediately mixed by turbulent diffusion. This means that sharp interfaces do not exist. Instead there will be a finite mixing region where the volume fraction of each fluid changes from zero to one. In this case the mixing coefficient will depend upon the relative concentration of the fluids. This approach leads to a degenerate diffusion problem.
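A degenerate diffusion model of this kind can be written, in illustrative notation (not necessarily that of the report), as

```latex
\frac{\partial \phi}{\partial t}
  = \frac{\partial}{\partial z}\!\left( D(\phi)\,\frac{\partial \phi}{\partial z} \right),
\qquad D(0) = D(1) = 0,
```

where $\phi$ is the volume fraction of the denser fluid and the diffusivity $D(\phi)$ vanishes at the pure-fluid limits, so that the mixing region spreads with finite speed rather than instantaneously.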

Three subgroups were formed, and each developed a different approach for solving the problem. These were the Portfolio Selection Algorithm Approach, the Statistical Inference Approach, and the Integer Programming Approach.

The first part of the report summarizes the discussions and the models proposed for a four-well problem. Because the models are non-linear, one of their major drawbacks is that they quickly become very computationally expensive, and are impractical for the number of wells at Cold Lake.

The second part of the report discusses a new approach to the problem, where it has been formulated as a linear programming problem whose size is independent of the number of wells. Results for a test case are presented.

The purpose of this paper is to carefully analyse each of the physical processes in the full system of partial differential equations describing the problem and, by making some basic assumptions, to derive a simple set of equations that captures the main features of the system, which is otherwise solved with an expensive CFD program.

We find considerable qualitative agreement between numerical results of the simplified model and those of the CFD program, which is quite remarkable considering their relative complexities.

Due to these low-emission environmental requirements in California, solutions must be implemented that do not entail release of these vapors into the atmosphere. One solution requires that the vapors fill a balloon during the appropriate times. However, the size of the balloon at typical inflation rates requires a significant amount of physical space (approximately 1000-2000 liters), which may not necessarily be available at filling stations in urban areas. Veeder-Root has a patent pending for a system to compress the vapors that are released to a 10:1 ratio, store this compressed vapor in a small storage tank, and then return the vapors to the original underground fuel tank when the conditions are thermodynamically appropriate (see Figure 1 for the schematic representation of this system).

The limitation of the compressor, however, is that the compression phase must take place below the ignition temperature of the vapor. For a 10:1 compression ratio, the adiabatic temperature rise of the vapor would exceed the ignition temperature. Mathematical modeling is necessary here to estimate the performance of the compressor, and to suggest paths in design for improvement.
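The claim about the adiabatic temperature rise can be checked with the ideal-gas relation for reversible adiabatic compression by a volume ratio r, T2 = T1 · r^(γ−1). The numbers below (intake at 20 °C, γ = 1.4 as for air) are illustrative assumptions, not Veeder-Root data; hydrocarbon vapors have a lower γ and would heat somewhat less.

```python
def adiabatic_temperature(t_initial_k, compression_ratio, gamma):
    """Ideal-gas temperature after reversible adiabatic compression
    by the given volume ratio: T2 = T1 * r**(gamma - 1)."""
    return t_initial_k * compression_ratio ** (gamma - 1)

# Illustrative numbers: vapor drawn in at 20 C (293.15 K),
# 10:1 compression, gamma = 1.4.
t2 = adiabatic_temperature(293.15, 10.0, 1.4)  # roughly 740 K
```

Even under these rough assumptions the end temperature is several hundred kelvin above the intake temperature, consistent with the concern that unmitigated adiabatic compression would exceed the vapor's ignition temperature.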

This report starts with a mathematical formulation of an ideal compressor, and uses the anticipated geometry of the compressor to state a simplified set of partial differential equations. The adiabatic case is then considered, assuming that the temporary storage tank is kept at a constant temperature. Next, the heat transfer from the compression chamber through the compressor walls is incorporated into the model.

Finally, we consider the case near the valve wall, which is subject to the maximum temperature rise over the estimated 10,000 cycles that will be necessary for the process to occur. We find that for adiabatic conditions, there is a hot spot close to the wall where the vapor temperature can exceed the wall temperature. Lastly, we discuss the implications of our analysis, and its limitations.

In the process of constructing the default tariff, IPART assumes that the cost of purchasing energy is equal for all retailers. IPART also makes no allowance for hedging costs, which will vary depending upon the NSLP of the electricity retailer. If one retailer has more NSLP volatility than other retailers, their hedging costs for default customers will increase. Under the current default tariff structure, these increased hedging costs become an unrecoverable expense.

The aim of this project is to explore the volatility of Integral Energy’s NSLP, relative to that of other retailers, with a view toward developing a risk multiplier that accurately and reliably quantifies the volatility differences between NSLPs.

Figure 1: National electricity consumption figures from the winter of 2003/04. Diamonds indicate daily levels; squares indicate dates on which British Energy issued Triad warnings; and circles indicate those dates on which a Triad was, at the end of the winter, declared to have occurred.

British Energy supplies electricity to large industrial and commercial customers. Charges are based on the Transmission Network Use of System (TNUoS) Charges which are levied by National Grid and passed through to British Energy’s customers. TNUoS includes a substantial surcharge based on each customer’s usage during the Triad periods: the higher a customer’s usage during Triads, the higher their overall electricity supply costs.

British Energy, in common with other large commercial electricity suppliers, aims to reduce National Grid charges for some of its customers by issuing a “Triad warning” on days when it seems possible that a Triad might later be found to have occurred. Industrial customers are often encouraged by their contract with British Energy to reduce their consumption on those warning days. British Energy are restricted by the contract to issuing a limited number of calls, up to 23, over the entire winter period.

A customer who receives a Triad warning may, or may not, take action (e.g. shutting down their factory early that day). Some – but not all – customers are contractually required to inform British Energy whether or not they are reducing their consumption. Therefore British Energy cannot be sure by how much overall consumption will drop if a warning is issued. Furthermore, the national consumption is also affected by suppliers other than British Energy who may also have issued a Triad warning.

It is thought that this system may suffer from “negative feedback”. That is, on a day that is likely to include a Triad period because it has high predicted consumption, many suppliers will issue warnings, resulting in many customers reducing their actual consumption, ensuring that the total national demand is actually much lower than predicted: sufficiently low, in fact, that no Triad occurs, and no warning was actually necessary. It is expensive for customers to take action when it is not actually required.

British Energy currently uses the deterministic half-hourly consumption forecast issued by National Grid and the recent and forecast temperatures at hourly intervals at 7 locations around the country to decide on a daily basis whether to issue a Triad warning. The decision-making tool, called TriFoS, does not currently take into account the possible problem of negative feedback.

The Study Group was asked to consider ways of compensating for negative feedback. Section 2 confirms that feedback is a statistically significant phenomenon. Sections 3 and 4 then use the so-called ‘Full-Information Secretary Problem’ to motivate the derivation of possible triad-calling strategies. Section 5 examines the possibilities for issuing triad warnings from analysing historical data.
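Secretary-style strategies of the kind motivated in Sections 3 and 4 reduce, in essence, to comparing each day's forecast against a threshold that depends on the days and calls remaining. The following is a crude stand-in for such a rule, not the strategy derived in the report: the quantile threshold and the inputs are illustrative assumptions.

```python
import numpy as np

def issue_warnings(forecasts, max_calls, quantile=0.9):
    """Call a Triad warning whenever the day's forecast exceeds the
    given quantile of the forecasts still to come, until calls run out.

    A simple threshold rule in the spirit of the full-information
    secretary problem; the quantile is an illustrative choice.
    """
    calls, warned_days = max_calls, []
    for day, f in enumerate(forecasts):
        if calls == 0:
            break
        remaining = forecasts[day:]
        if f >= np.quantile(remaining, quantile):
            warned_days.append(day)
            calls -= 1
    return warned_days
```

A real strategy would also have to model the negative feedback itself, since issuing a warning changes the very consumption the threshold is judged against.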


This project used data from a wind farm and three meteorological stations to assess methods for, and the feasibility of, predicting wind speed. Analyses using regression, neural networks, and a Kalman filter were examined. Prediction using a combination of local wind measurements and meteorological data appears to give the best results.
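Of the three methods examined, the Kalman filter is the easiest to sketch compactly. The scalar filter below assumes a random-walk model for wind speed; the state model and the noise variances are illustrative assumptions, not the project's actual configuration.

```python
def kalman_predict(measurements, process_var=1.0, meas_var=2.0):
    """One-step-ahead predictions of wind speed from noisy measurements,
    assuming a random-walk state model x_{t+1} = x_t + w_t."""
    x, p = measurements[0], 1.0   # initial state estimate and variance
    predictions = []
    for z in measurements[1:]:
        # Predict step (random walk: state estimate unchanged, variance grows).
        p += process_var
        predictions.append(x)     # forecast for this time step
        # Update step with the new measurement.
        k = p / (p + meas_var)    # Kalman gain
        x += k * (z - x)
        p *= (1 - k)
    return predictions
```

In the combined approach described above, the meteorological data would enter through a richer state model or as an exogenous input, rather than the bare random walk used here.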