BRITISH ARTILLERY FIRE CONTROL

CALIBRATION

Updated 4 May 2014

CONTENTS

INTRODUCTION

MV COMPONENTS AND INFLUENCES

WORLD WAR 1

BETWEEN THE WARS

WORLD WAR 2

AFTER WORLD WAR 2

INTRODUCTION

This page describes how British thinking and practices for artillery calibration evolved during the 20th Century.  Calibration is used to determine the muzzle velocity (MV) of guns.  This information is essential for accurate predicted fire.

Calibration determines the actual MV of a gun; the difference between this and the standard MV is applied as a correction to the range in a gun's firing data.  The main cause of MV change is barrel wear.  MVs vary slightly each time a shell is fired, so calibration produces a mean MV from a series of shells.  This round to round variation is mostly caused by very small variations in ammunition, and it causes dispersion of a gun's fall of shot around its mean point of impact.  This round to round dispersion is not corrected by calibration and is a significant component of the Probable Error in range.  However, barrel wear also increases dispersion because it affects the stability of the shell in flight, which reduces its aeroballistic efficiency as well as its carrying power.  There are also other factors that affect MV, and not all of them have been identified and proven, although there are various theories.
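The arithmetic of reducing a calibration series to a mean MV and a round to round spread can be sketched as follows; the figures and function name are illustrative only, not an official procedure.

```python
from statistics import mean, stdev

def calibrate_series(mvs):
    """Reduce a series of measured MVs (any consistent unit) to a
    mean MV and an estimate of round-to-round Probable Error.
    The spread is not removed by calibration; it contributes to
    the Probable Error in range."""
    mv_mean = mean(mvs)
    # Probable Error is about 0.6745 x standard deviation for a
    # normal distribution
    pe = 0.6745 * stdev(mvs)
    return mv_mean, pe

# A hypothetical 5-round series for one charge, in m/s
series = [488.1, 489.4, 487.6, 488.9, 488.0]
mv, pe = calibrate_series(series)   # mean 488.4 m/s
```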

An inaccurate MV has several possible effects.  For an individual gun its fire will be inaccurate because its mean point of impact will be in the wrong place.  However, it's highly unlikely that all guns of a battery, regiment, etc, will have their MVs wrong by the same amount, and the variation will not necessarily be in the same direction (ie all fast or all slow).  This may mean that the massed fire is accurate, with its mean point of impact in the right place, but more dispersed - the guns don't 'shoot together'.  However, if guns have fired a lot since they were last calibrated then their individual MVs will generally all be slower, so the collective mean point of impact will be inaccurate as well as dispersed.

Barrels, including chambers, erode with use from the effects of heat and chemical action on the metal surface and the abrasive effects of the hot propelling gases and the shell’s driving band.  The primary causes of this erosion are the heat and pressure of the gases produced by the burning propellant.  These effects can be exacerbated by the metallurgy of barrel and driving band.  As barrels wear their MV reduces.  However, over time each gun fires a different amount and with different size propelling charges, so the MVs of guns all vary, although guns that have fired a similar amount will have similar MVs.  The amount of wear and MV are closely correlated.  This MV loss has two main causes:

The first cause has another MV losing effect, particularly towards the end of a barrel's life - gas leaks around the driving band resulting in loss of gas pressure.  The erosion per round fired can increase if a gun fires at a high rate of fire for a period.  This is one of the reasons for modern guns being restricted to only a few minutes firing at an intense rate.

Chamber pressure is important because the rate at which propellant burns is generally proportional to pressure; lower pressure means the position of 'all-burnt' is further forward, so the total force exerted on the shell is reduced.

However, while barrels may wear at a constant rate related to the number of shells and different charges fired, MVs also have change patterns of their own.  These include:

The process of determining actual MV was and is called ‘calibration’ and this meaning is used on this page.  However, in the earlier years and into World War 1 (WW1) the term was sometimes used to include correction for variations from standard conditions used for map shooting (later called predicted fire).

Using accurate MVs was and is important for two reasons:

Range Tables (and Firing Tables) are compiled for a ‘standard MV’ for each charge.  British practice was that the standard MV for a charge represented a value about halfway through the first quarter of a barrel’s life, with the selected MV being based on measured MVs of several new guns.  Some nations set the standard MV by other methods and used the ‘hump’ MV (the peak MV reached early in a barrel’s life) as the standard MV, which has the effect of inflating the firing table maximum range and looks better in the sales brochure.

Of course there are measures to reduce barrel wear.  Using very cool burning propellants isn't usually a popular option for field artillery because these are bulkier than hotter burning ones and require bigger chambers, which increases gun weight (although UK developed cooler 'triple base' propellants in the 1930s that were about 15% larger than the older and hotter double base ones).  Propellant additives such as dinitrotoluene also reduce the general burning temperature.  Another approach is reducing the heat against the barrel wall, which involves some sort of insulating or protective boundary layer.  Chrome plating the barrel is one modern approach, but others involve materials or additives that put cooler gas against the wall.  The first method used was probably silk charge bags, but other combustible charge containers, waxed paper and polyurethane foam liners have been used.  Swedish Wear Additive uses a synthetic textile coated with paraffin wax containing titanium or tungsten oxide and subsequently magnesium silicate (talc); it is used at the front of a charge container.

The MVs found as a result of calibration were adopted for use by the battery operating the guns.  Corrections to firing data range have to be made for their difference from standard MV; this is done either during calculations in a command post or by calibrating sights on the guns.  However, range is not the only data affected by MV.  Clearly MV also affects the time of flight, which means it affects the setting on a time fuze, ie the fuze that either bursts the shell in the air or causes the shell's cargo to be ejected.  Again the correction can be calculated in a command post or by a device at each gun, a fuze indicator.

Since the introduction of battlefield computers MV corrections have been applied as part of ballistic calculations in the command post.  Before electronic computers the British preference was for calibrating sights and fuze indicators.  However, increased computing power on each gun means that an MV correction can now be applied at the gun using MV data measured from immediately previous rounds.  Nevertheless allowance has to be made for projectile weight and propellant temperature, which complicates the matter.
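As a simple illustration of the command post arithmetic, the first order correction to range for a non-standard MV can be computed from a firing table sensitivity factor; the function and figures below are assumptions for illustration, not taken from any firing table.

```python
def range_correction(mv_actual, mv_standard, metres_per_mps):
    """First order (linearized) range effect of a non-standard MV.
    metres_per_mps is the firing table sensitivity (metres of range
    per m/s of MV) for the charge and range in use; in reality it
    varies with range, so this is only valid near that range."""
    return (mv_actual - mv_standard) * metres_per_mps

# A gun shooting 4 m/s slow, with an assumed sensitivity of
# 25 m of range per m/s, falls about 100 m short of the aimpoint;
# the firing data is corrected accordingly.
shortfall = range_correction(469.0, 473.0, 25.0)   # -100.0 m
```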

MV COMPONENTS AND INFLUENCES

In the beginning the factors influencing MV were not well understood.  These factors emerged during the 20th Century as data was collected; they are the subject of internal ballistics.  Furthermore, different MV measuring methods give different results.  In summary the factors, as they are understood today, are as follows:

MV variations have both attributable causes, notably propellant (charge) temperature and projectile weight, but also barrel temperature, depth of ram and ammunition manufacturing tolerances, and unattributable ones.

During prediction, corrections are only routinely made for the first two of the attributable variations.  However, charge temperature measurements in the field are not notably precise.  In addition charge temperature varies during the diurnal cycle (cooler at night, hotter during the day, although it lags behind the change in air temperature) and there can be variations between individual charges depending on their storage arrangements.  Barrel temperature and depth of ram also vary in the field.  Charge weight and cartridge case dimensions, like projectile weight and dimensions, are a matter of manufacturing tolerances.  Of course driving band diameter may affect depth of ram and the resistance from the rifling lands.  Cartridge batches are adjusted to standard energy during manufacture to compensate for variations in propellant production lots, but there is still a very small residual variation.

In addition when guns change from one charge to another they usually require one or two rounds before their MVs ‘settle’ around their mean MV, and cold guns may also take a round or two to ‘settle’.  There are also extremes – the 175-mm M107 is sometimes thought to have needed up to 40 rounds to ‘settle’!  These gun effects are variously called ‘charge order’ or ‘barrel memory’.  The causes of this and other unattributable variations and the physics of how they relate to each other are still not well understood.  Some of these effects are eliminated by using synthetic, instead of copper based, driving bands.

An added complication is that different types of propellant do not maintain their relative MV performance as a gun wears.  They are manufactured to give identical standard MVs.  However, actual MVs for the same charges but different types of propellant diverge with barrel wear.  This is caused by their different burning rates and hence speed of producing a particular gas volume, whose pressure varies with chamber and barrel volume as these wear.    Even the same propellant from different manufacturers can vary significantly, a problem that was found in WW2 with British and Canadian produced 25-pdr propellant.

Barrel wear can be measured in two ways: directly, by physical measurement of the bore; or indirectly, from a record of Equivalent Full Charges (EFCs) fired, each charge being assigned a relative wear value.

Measured wear is more accurate and preferred.  Relative EFC values are approximations so over many rounds different quantities of different charges may add up to the same value but actual wear may differ.
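EFC book-keeping amounts to a weighted count of rounds fired.  A minimal sketch, with purely invented relative wear values:

```python
def efc_total(rounds_by_charge, efc_factors):
    """Sum Equivalent Full Charges from a record of rounds fired.
    efc_factors maps each charge to its approximate wear relative
    to the top charge (the values below are invented)."""
    return sum(n * efc_factors[charge]
               for charge, n in rounds_by_charge.items())

factors = {'charge 2': 0.1, 'charge 3': 0.25, 'super': 1.0}
fired = {'charge 2': 200, 'charge 3': 120, 'super': 30}
total = efc_total(fired, factors)   # 20 + 30 + 30 = 80 EFCs
```

As the text notes, two guns with the same EFC total may have quite different actual wear, because the relative values are approximations.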

Broadly, there are four methods of calibration: fall of shot, instrumental measurement at the muzzle, wear measurement and EFCs.

In fall of shot calibration the impact point of each shell is recorded, traditionally calculated by cross observation using accurate angular measurements from surveyed points.  Alternatively airburst could be used instead of groundburst, and radar observation introduced another method.  Fall of shot results include jump and stability.  The latter deteriorates as a barrel wears.

Figure 1 - Fall of Shot Calibration


Instrumental calibration started in WW1 by firing the shell through two screens attached to recording apparatus using sound ranging technology, then moved to camera techniques and finally in the 1950s changed to Doppler radar.  Doppler radar provided accurate instrumental measurement of MV for the first time.  It also enabled further research.  However, instrumental calibration measures 'pure' MV; in contrast fall of shot captures other gun specific variables that are not actual MV.  These include individual gun variations from standard jump and droop, and the effects of barrel wear on the stability of the shell in flight, which in turn affects range in addition to the actual MV effect on range.

Wear measurements use tabulated data, derived from the calibration results using the other two methods, to give an MV loss (or gain) for amounts of measured wear.  It depends on the calibre, but a ‘quarter of life’ (of a barrel) is between perhaps 0.05 and 0.1 mm.  These are quite small numbers and the same sort of effect can result from badly machined driving bands.
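Looking up an MV loss for a measured wear is essentially interpolation in tabulated data.  A sketch, with invented table values standing in for range table data:

```python
def mv_loss_from_wear(wear_mm, table):
    """Linearly interpolate MV loss from a tabulated wear/MV-loss
    relationship.  'table' is a sorted list of (wear in mm, MV loss)
    pairs; the values used below are invented for illustration."""
    for (w0, l0), (w1, l1) in zip(table, table[1:]):
        if w0 <= wear_mm <= w1:
            return l0 + (l1 - l0) * (wear_mm - w0) / (w1 - w0)
    raise ValueError("wear outside table")

table = [(0.00, 0.0), (0.05, 6.0), (0.10, 14.0)]   # mm -> ft/sec lost
loss = mv_loss_from_wear(0.075, table)             # about 10 ft/sec
```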

UK experience was that using recorded EFCs gave very inaccurate results.  The problems were that the EFC records may be inaccurate, that the EFC values were approximations, and that local circumstances when firing also influenced the amount of wear.

WORLD WAR 1

Calibration by field artillery had been considered before the war and it had been adopted by the Navy and coast artillery.  However, for field artillery it was thought that there would be insufficient firing to cause significant barrel wear; this is interesting because the hot burning Cordite Mk 1 used in the Boer War had caused significant barrel wear.  Since all shooting was ‘observed fire’ it was also considered that any variations due to wear would be ‘shot out’ during ranging.  By mid 1915 calibration was recognised as an important issue for field artillery, because of its implications for shooting at targets close to own troops and the emergence of map shooting,  and instructions were issued.  However, in this period there was little or no understanding of the variations in gun performance that affected calibration results.

Initially calibration was by fall of shot, Figure 1 above.  The range to each gun’s mean point of impact was compared to what it should have been after making allowance for the difference in meteorological conditions and charge temperature from standard.  It was also important to use standard weight shells, because initially there was no data for shell weight variations, and to have guns and target area at the same height.  The residual difference in range between aimpoint and actual impact was converted to a difference in MV, the ‘calibration error’, and used by the gun as ‘Gun Corrections’ – distance differences to range applied by each gun No 1 (detachment commander).

An important point to note is that accurate fall of shot calibration requires accurate data about the effects of non-standard conditions at the time of calibration.  In other words, if you didn't accurately know how different the wind, air temperature and air density were from standard conditions then the calibration derived correction to MV was inaccurate.  Anything approaching good quality data for 'conditions of the moment' was not available until about the middle of 1917, and that's probably a generous interpretation of 'good'.  Nevertheless even poor data produced MVs that enabled the guns of a battery to shoot together, although they did not shoot accurately to the map.

Use of calibration started in 1915; in August GHQ issued Notes on Close Shooting by Guns and Howitzers, registration of targets and calibration.  Instructions required stable meteor conditions and shooting at a target accurately located on the map with airborne observation available.  Batteries measured surface meteor data and were advised to ask the RFC for an estimate of wind velocity.  Calculations were to be done using special calibration slide rules - the battery rule and the gun rule that would be provided 'when available'!  The battery rule corrected the fired range for barometer, air temperature and wind; the gun rule corrected it for charge temperature and gave the actual MV.  It's unclear how widely these rules were issued; however, they could be fairly easily constructed using paper or card and data in Range Tables.  The instruction also required that each gun being calibrated fired a 5 round series at the target.

Calibration became routine in 1917 and by autumn that year 3rd and 4th Armies in France had permanent calibration ranges in their rear areas.  Previously calibration firing had been undertaken into enemy held ground.  Furthermore guns with similar MVs were being grouped in the same battery, at least in field artillery.  The best method of calibration observation at the front was cross-observation by 3 posts of the Observation Section in each army’s Field Survey Company.  However, ranging a datum point by a ground or air observer and comparing the initial firing data with the ranged firing data gave useful results.  

Early attempts at instrumental calibration were not very successful, but in 1917 sound ranging recording apparatus was connected to wire screens to measure the time taken for a shell to pass between the 2 screens.  This gave good results.  By early the following year all armies and some corps had facilities set up for instrumental calibration.  These ranges could accommodate several guns on their firing point; the guns fired at low elevation into a hillside, aiming their shells to pass through the wire mesh screens.  This enabled the shells' time between screens to be measured.
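The screen measurement itself is simple: the mean velocity over the baseline is distance divided by time, and this was taken as the instrumental MV.  The figures below are illustrative assumptions:

```python
def screen_velocity(screen_spacing_m, transit_time_s):
    """Mean velocity of a shell between two screens.  Strictly this
    is the velocity at the mid-point of the baseline, a little below
    the true muzzle velocity, but the WW1 apparatus treated it as
    the instrumental MV."""
    return screen_spacing_m / transit_time_s

v = screen_velocity(30.0, 0.0625)   # 480 m/s over an assumed 30 m baseline
```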

However, the Ordnance Committee in London argued that an instrumental MV did not correlate with shell performance as measured by fall of shot MV, because with worn guns instrumental measurement did not reflect shell steadiness in flight, which decreased with barrel wear, increasing air resistance and so decreasing range.  The Ordnance Committee was, of course, correct but the issue was to rumble on for four decades.  Nevertheless trials in France showed the two types of MV correlated quite well for howitzers, although 60-pdr guns were not so good, possibly because jump and droop were inadequately considered in calculations.  The BEF’s MGRA advised the use of instrumental calibration.

Regular calibration of all guns in a battery, particularly if they were howitzers and had several charges, was both time consuming and involved substantial ammunition expenditure.  It was usual to fire as many as 20 rounds from each gun to calibrate a single charge, propellant type and shell combination.  There could be more than one propellant type, different shell models, some with different crh and some with different driving bands, which also affected the shells' ballistic properties.  These combinations and the consequential number of rounds needed for calibration firings were probably acceptable in the second half of WW1 when ammunition production was in full swing.  In peacetime they cause bean-counters to blanch!  Comparative calibration was therefore introduced.  This involved having a single calibrated gun and comparing the others to it.  The basis for comparison was physical measurement of barrel wear.

Of course calibration introduced a new problem, how to allow for different MVs among the guns of a battery.  It’s important to note that on British guns range was set as a distance on the range indicator (or drum, and graduated in 25 yard intervals), not as an elevation angle, unless the field clinometer was in use.  The official solution was to calculate gun corrections (distances) and corrector settings that each gun applied to their range drum and fuze indicator.  Calculating these for every gun was bothersome, which encouraged grouping guns in batteries according to the MVs.  An alternative was using a single average correction if the MV spread wasn’t too wide in a battery.  In some cases the guns’ range indicators were adjusted, however this caused errors at ranges different to that used in calibration, particularly for fuze settings.  

BETWEEN THE WARS

Calibration procedures evolved after WW1.  In 1928 calibration methods were defined as absolute or comparative.  The former was ‘proper’ calibration by fall of shot, screens or photographic method (the last still ‘under development’), with careful selection of ammunition and firing series of 5 rounds for observation.  The purpose of comparative calibration was to ensure that the guns shot alike.  Guns fired a series of rounds at the same elevation, their fall of shot mean point of impact was compared to that of a designated ‘standard’ gun (ideally absolutely calibrated) and the differences were converted to a corrected MV.

In this period the way in which calibration was used also changed.  Calibrating sights (Probert pattern) were introduced in the 1930s instead of the Numbers 1 applying it as a gun correction.  In these a gun’s actual MVs were incorporated mechanically in the sights (and changed when necessary) so that when the range was set on the sights it was automatically offset for MV difference from standard.  The same type of arrangement had been introduced in WW1 with fuze indicators, with which each gun determined the fuze length to be set on a fuze.

Artillery Training Volume 2 Field Gunnery 1934 (AT Vol 2) was amended in 1936 to introduce a new calibration system that recognised four methods for absolute calibration.  The two existing methods, fall of shot and instrumental at the muzzle (electrical screens or photographic), were supplemented by wear measurement and EFC.  It also recognised that there were ‘day to day’ variations that were considerably greater than ‘round to round’ variations but that their causes were unknown.  A peacetime calibration policy was also established, the minimum requirement being a calibrated ‘standard gun’ in each battery (money and ammunition were scarce).

Other changes included firing different propellant lots in different calibration series on different days and meaning (averaging) the results.

Calibration by wear involved measuring the bore at 1 inch from the commencement of rifling then using data in the range tables to find the equivalent MV loss.  Supplementing this were graphs for each gun plotting fired calibration MVs against wear at calibration and allowing extrapolation against new wear measurements.  The EFC method was in principle similar to wear measurement although it wasn’t as accurate.  Of course one problem was obtaining sufficient and reliable data that related MV measurements to wear and EFC data.

There were also methods for applying fired calibration results to charges that were not calibrated.  Finally official Army Forms were introduced, a Record of Calibration (B 2565 Guns and 2566 Howitzers) and a Record of Comparative Calibration (B  2567 Guns and 2568 Howitzers) that were kept (with the wear and EFC graphs) with each gun’s Memorandum of Examination.  Subsequently amendment 4 (Apr 1937) to AT Vol 2 provided revised rules for discarding a round from a calibration series.

WORLD WAR 2

At the outbreak of war a Calibration Troop of two sections had been mobilised.  Sections were equipped with a Muzzle Velocity Camera as well as being able to undertake fall of shot calibration.  Their tradesmen were Surveyors RA and the troop went to France in 1939.   A total of five calibration troops were formed during the war and provided calibration in all theatres.  This usually necessitated creation of an artillery range and survey of the calibration firing positions and cross observation posts.  The latter were usually observation sections or flash spotting posts from a corps survey regiment.

In WW2 calibration issues were not raised until Royal Artillery Training Memoranda (RATM) No 6 issued in September 1942.  It advised that no wear tables existed for some guns (eg 4.5, 5.5 and 7.2-inch because they were still new and data had not been accumulated).  Also that the graph of log MV against log charge weight was only applicable if different charges had the same propellant.  It also suggested a simple method for reasonably accurate results: calibrate all guns at one charge and the battery standard gun at all charges.  Then assume that MV differences were proportional and calculate other guns’ MVs accordingly.
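The RATM No 6 short cut can be sketched as follows; 'proportional' is read here as a simple ratio, which is one plausible interpretation of the instruction, and all MV values are invented:

```python
def proportional_mvs(gun_mv, std_gun_mvs, std_gun_mv_same_charge):
    """Estimate a gun's MVs at uncalibrated charges by assuming its
    MVs stay in the same ratio to the standard gun's MVs as at the
    one charge at which both were calibrated."""
    ratio = gun_mv / std_gun_mv_same_charge
    return {charge: mv * ratio for charge, mv in std_gun_mvs.items()}

# Standard gun calibrated at all charges; this gun only at charge 3.
std = {'charge 1': 213.0, 'charge 2': 279.0, 'charge 3': 450.0}
est = proportional_mvs(441.0, std, std['charge 3'])   # ratio 0.98
```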

In 1943 a new pamphlet was published, AT Vol 3, Pam 7 Calibration.  This continued the use of absolute and comparative calibration, and of updating MVs by wear measurement or EFCs.  It also explained that range was affected by MV,  jump and shell steadiness, and that direct fire guns, ie anti-tank guns, which were short range, were more affected by jump than MV and therefore they were zeroed and not calibrated.

It also provided wear tables for 25-pdr and 5.5-inch and explained that these should be used before any calibration to provide an MV for a newly manufactured gun, and between calibrations to update the adopted MVs.  Adopted MVs were to be those found by fall of shot calibration, and if instrumental calibration was used then MVs were to be converted to fall of shot MVs using Range Table data.

Pam 7 also introduced a wartime calibration policy.  This differentiated between regiments in the field and those that were not, and relaxed the rules for the former.  The main points of the policy were:

In the field the policy was to be applied as follows:

The policy also stated that each gun’s MVs should always be known to within 5 ft/sec, that the regimental commander was responsible for the standard guns’ MVs, and battery commanders for the other guns.

In addition Pam 7 introduced a new approach to calculating adopted MVs from calibration firings.  This involved combining the results of calibration firing and the results of wear measurements applied to the previous calibration.  For absolute calibration the wear-updated MV was added to four times the newly measured fall of shot MV and the total divided by 5.  For comparative calibration the fall of shot MV was added to the previously calibrated MV (fully updated for wear) and the result divided by 2.  However, it also required that an absolutely calibrated gun only fired at two charges.  The results of these were used to derive an ‘adopted wear’, which was then used with wear tables to calculate adopted MVs for the uncalibrated charges and provided the baseline for MV updating from measured wear between calibration firings.
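The Pam 7 weighting rules are simple weighted means; a sketch with invented MV values:

```python
def adopted_mv_absolute(wear_updated_mv, fall_of_shot_mv):
    """Pam 7 (1943) rule for absolute calibration: the wear-updated
    MV plus four times the new fall of shot MV, divided by 5."""
    return (wear_updated_mv + 4 * fall_of_shot_mv) / 5

def adopted_mv_comparative(previous_mv, fall_of_shot_mv):
    """Pam 7 rule for comparative calibration: the mean of the
    previous (wear-updated) MV and the new fall of shot MV."""
    return (previous_mv + fall_of_shot_mv) / 2

mv_abs = adopted_mv_absolute(470.0, 465.0)      # 466.0
mv_comp = adopted_mv_comparative(470.0, 465.0)  # 467.5
```

The 1:4 weighting leans heavily on the new firing while retaining some memory of the wear-updated value; the December 1944 change described below abandoned this in favour of the mean of the fired series alone.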

Finally, Pam 7 required regiments to report their calibration results to the School of Artillery, presumably to provide data for MV and wear/EFC tables.

In mid-1944 another change in technical policy was announced in RA Notes Para 909.  The correction for droop (in Range Tables) was abolished because it was difficult to measure in the field and guns didn’t shoot in accordance with mean droop.  Its effects were taken up in fall of shot calibration.  

RATM No 13 published in December 1944 introduced two changes as a result of regiments reporting their calibration results.  These were the result of comprehensive trials under the auspices of the Committee on the Accuracy of Artillery Fire.  The fall of shot calculation method was unchanged but it made changes to calculation of adopted MV.  The reasons for the changes were:

The revised procedure stated that measured wear was not to be included in calculations of adopted MV from calibration firing (both absolute and comparative).  Standard guns would continue to be calibrated for each important charge (in 25-pdr and 5.5-inch this was charges 2, 3, 4 (not 25-pdr) and Super) and adopted MV would be the mean of the series fired with a charge.  

Absolute calibration ideally involved 4 groups of 5 rounds from a standard gun at selected charges (basically all less 1), fired in a minimum of 2 series.  Comparative calibration required all guns, including the standard, to fire 1 series at each charge.  If only 1 charge was calibrated then the proportional method could be used for other charges, providing all guns had MVs within 30 ft/sec of standard gun.  

However, calibration results were to be used to deduce an adopted wear for each charge, which was the basis for updating MVs from physical wear measurements.  The mean of these adopted wears was to be used to give the adopted wear for uncalibrated charges and hence enable derivation of an MV.

RATM No 13 also announced the intention to provide army or army group standard guns, absolutely calibrated at the School of Artillery, for comparative calibration of regiments’ guns.  These standard guns were provided by Calibration Troops.  In 1943 Middle East Forces had suggested that standard guns in their 1st quarter of life were held in each theatre and issued to regiments whenever calibrating.  The purpose of this was to eliminate the calibration overhead of regiments having their own standard guns.  It also noted that it was still not possible to provide factors to relate instrumental MV to Fall of Shot.  The relationship could differ, particularly for ovality at the muzzle (which affected shell stability) and possibly for muzzle brakes.  Although RATM 13 did not mention it, it had been found that there was a significant difference in jump between 25-pdr with and without muzzle brakes.

AFTER WORLD WAR 2

A report by No 2 Operational Research Section in NW Europe in late 1944 highlighted that errors in muzzle velocity were the main cause of inaccurate predicted fire.  This was supported by further analysis in 1945 and trials at Larkhill in 1947.   AT Vol 3 Pam 10 Calibration 1947 replaced the previous publications and stated that calibration was to be regarded as a continual process.  The performance of individual guns was to be recorded.  It had been found that a simple relationship between the MV of a gun calibrated at one charge and the MVs expected from the same gun at other charges with same propellant nature didn’t always exist.  It stated calibration methods as:

Calibration policy was for regimental commanders to ensure periodic ‘full absolute and comparative calibration’.  Keeping MVs up to date between full calibrations was by a combination of periodic checking by firing, bore measurement and adjustment on basis of EFCs fired.  Full absolute and comparative calibration of a regiment’s guns was to be carried out when:

The MVs for charges and propellant types not calibrated were to be assessed by comparison with other charges.  Meteor conditions were to be stable and close to standard, angle of sight small, the range such as to give an elevation of 15-20°, and for charge Super the range was to be the mean at which the charge was likely to be used.  Shells were to be standard weight; mixed lots of propellant were to be used but each gun was to fire the same mix.  Cross-observation or radar was used to plot the fall of shot.

Instrumental calibration equipment was held by calibration troops.  Instrumental MVs were to be converted to fall of shot MVs.  Range Tables had a table giving, for various wear measurements, the difference between instrumental and fall of shot MVs; this difference was to be applied, preferably using the gun’s Life History Graph.  Records of Calibration were to be maintained using B 2566.

For charges not fired, wear tables were to be used, assuming the MV differences of adjacent charges were proportional to the calibrated charge’s MV.  However, proportion factors couldn’t be used if wear tables were unavailable.  Calibration was to be periodically checked between full calibrations by cross-observation of ground burst, airburst ranging or adjusting mean point of impact onto a target.  A check could also be triggered if a gun was shooting differently to the standard gun.  Wear was always to be measured.  The Life History Graph was to be used to plot MV relative to the wear line.  Comparative calibration was to be against the standard gun.

During the early 1950s calibration trials were undertaken.  They found that existing instruments were unreliable but Doppler radars were very reliable and accurate, and their results could be correlated with fall of shot MVs after adjustment for wear.  These trials in 1955 used a Naval Type 900 Doppler radar.  This led to the 'Equipment, Radar, Gun Calibration' (designated Radar, Gun Control, No 1, Mk 1), which equipped calibration troops.  This suggests that the earlier problem had been the variability of instrumental MV, whereas properly conducted fall of shot calibration was reasonably consistent.  However, the radar measurements also highlighted the significance of occasion to occasion variations.
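The principle behind Doppler measurement of shell velocity is the standard continuous wave radar relationship; a sketch with assumed frequencies (real MV radars track the shell down-range and extrapolate the velocity back to the muzzle):

```python
def doppler_velocity(doppler_shift_hz, transmit_freq_hz,
                     c=299_792_458.0):
    """Radial velocity of a target from the Doppler shift of a
    continuous wave radar return: v = f_d * c / (2 * f_t)."""
    return doppler_shift_hz * c / (2 * transmit_freq_hz)

# An assumed 10 GHz radar seeing a 31 kHz shift: roughly 465 m/s
v = doppler_velocity(31_000.0, 10e9)
```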

A new edition of Pam 10 Calibration appeared in 1957 and again in 1961.  The 1961 edition emphasised that calibration was complicated by random variations, the relationship between instrumental and fall of shot MVs and the relationship between MV and wear.  The random variations comprised:

The difference between instrumental and fall of shot MV was greater than the day to day variation.  Range tables gave a theoretical relationship between MV and wear but it cautioned that this data should be treated with care.

Three types of calibration were used:

The normal method of calibration was fall of shot, but instrumental measurements were to be taken during absolute calibration and all series were to be preceded by two ‘warmers’.  A Record of Calibration (B 2566) and Adopted MV Graph (B 6549) were maintained for every barrel as part of the Gun History Book (AB 402), which had replaced the Memorandum of Examination, and MVs could be brought up to date for wear using the graph and range table data.  The main problem was the amount of ammunition required to absolutely calibrate every gun.

During this period calibration was being simplified.  The variety of propellant types was reducing, so the issues of transferring the results of calibration with one type of propellant to another were disappearing.  When the last of the WW2 guns left service there was only one type of propellant for most charges with most guns.  The remaining problem was US ammunition, where there were both green bag and white bag propellants for some 155-mm and 203-mm charges.  This was exacerbated by two different sets of white bag charges, one of which was virtually never calibrated due to the shells involved.

During the late 1950's a new Doppler radar suitable for field use was developed.  Comparative trials showed it to be an improvement on Radar, GC, No 1, Mk 1 and it entered service in the mid 1960's as the Electronic Velocity Analyser (EVA).  EVAs were held as theatre pool equipment in peacetime.  It was quite a large device and still needed a specialist operator.  It was placed beside a gun and ‘aimed’ a short distance in front of the muzzle; a remoted sensor detected the shell leaving the barrel and switched on the recording equipment, the analysers, in the EVA vehicle.  The analysers produced a paper trace from which raw data could be measured.  This data was used to calculate MVs, either manually or using FACE.  Originally it may have been thought that EVA could be used during routine firing.  However, experience quickly proved that dedicated calibration firing was still necessary because EVA was incompatible with normal training.

Electronics continued to shrink, and in the 1970s various other MV radars were acquired, mainly for use in trials but also for calibration of guns in units.  They included radars that could be mounted on a gun, notably the Lear Siegler PDR.

Figure 2 - Electronic Velocity Analyser (EVA)

In 1972 AT Vol 2, Pam 14, Part 4 Calibration replaced the 1961 pamphlet and followed the new calibration policy adopted in 1968.  It again explained that random variations in MV were round to round, lot to lot, day to day and gun to gun, but that instrumental measurements reduced the day to day variation compared to fall of shot.  It again referred to the differences between instrumental and fall of shot MV, with the differences given in Firing Tables (which were replacing Range Tables) as metres per second for wear measurements.  However, it also advised that ‘current data’ were estimates, the problem being that none of the modern guns had much wear in UK service; peacetime armies cannot afford to wear out a few barrels on new guns to capture MV life data.  Of course fall of shot calibration incorporates the gun to gun differences in tangent elevation caused by variations in droop and jump, whereas instrumental calibration measures 'pure' MV.

The key feature of the new policy was an 'Adopted MV' for each charge, which could be updated with wear data.  EVA was used for either full calibration or individual MV checks.  The latter were authorised annually, or at every 1/20th of barrel life, with a small allocation of ammunition.  Full calibration was only undertaken when a gun first entered service, when a new ammunition system was introduced or when individual checks didn’t give acceptable results.  Mixed lots were always used, and full calibration used series of a warmer and 7 rounds to count.  However, if a round’s MV differed from that of the one before it by more than an amount detailed in Firing Tables it had to be excluded from MV calculations.

Full calibration also used ‘tie-in’ guns from other batteries calibrated on different days.  Within a regiment each battery calibrated on a different day, together with one gun from each of the other batteries.  The guns that were calibrated on several days were then used to produce corrections to the other guns; in other words this meaned out at least part of the day to day variations.  The purpose was to ensure that a regiment's guns shot together.  Individual MV checks used a 4 round series to produce an MV.
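The tie-in arithmetic can be sketched as follows.  This is a minimal illustration of meaning out day to day variation, assuming each day's correction is simply the difference between that day's tie-in mean and the overall tie-in mean; the names and figures are invented, not the laid-down drill.

```python
def day_corrections(tie_in_mvs):
    """tie_in_mvs: {day: [MVs of the tie-in guns fired on that day]}.
    Returns a correction per day, taken here as the overall tie-in mean
    minus that day's tie-in mean.  Applying each day's correction to the
    guns calibrated that day means out part of the day to day variation,
    so the regiment's guns 'shoot together'."""
    all_mvs = [mv for mvs in tie_in_mvs.values() for mv in mvs]
    overall = sum(all_mvs) / len(all_mvs)
    return {day: overall - sum(mvs) / len(mvs)
            for day, mvs in tie_in_mvs.items()}

# Illustrative use: the tie-in guns read fast on day 1 and slow on day 2,
# so day 1 results are corrected down and day 2 results up.
corrections = day_corrections({'day1': [500.0, 502.0],
                               'day2': [498.0, 500.0]})
```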

The calibration process was to measure MV for a charge, then correct it for non-standard charge temperature and shell weight to give the 'Instrumental MV' (IMV).  This was then corrected for wear and compared with the 'Firing Table Zero Wear MV' (FTZWMV), the MV expected from a new gun.  The difference was the occasion to occasion correction, which was applied to the IMV to give the 'Adjusted IMV' (AIMV).  There were two methods.  Method A required precise placement of the EVA antenna and used manual film reading and calculation.  Method B gave more latitude in antenna placement and still used manual film reading, but used the FACE calibration program (on a separate cassette) for calculation, and its calculations included the 'warmer' rounds.  If the two extreme MVs were more than 8 PEMV apart then the most extreme was rejected, and this was repeated until the two 'outer' MVs were less than 8 PEMV apart.  The resulting AIMV was meaned with previous AIMVs, using weighting factors, to give the new 'Adopted MV'.  However, the results could not be adopted until they had been checked by the School of Artillery, which meant that data (including wear measurements) was centrally recorded.  These results were also used to update FTZWMV and MV wear relationships if necessary.
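Part of this chain lends itself to a short sketch: the correction of each round to standard conditions, the 8 PEMV rejection rule, and the weighted adoption of the result.  The wear and occasion to occasion steps are omitted because their exact sign conventions are not given here; all function names, correction values and the equal weighting are assumptions for illustration only.

```python
def reject_outliers(mvs, pe_mv):
    """Drop the most extreme MV until the two extreme values are no
    more than 8 PEMV apart, broadly as the 1972 pamphlet required.
    'Most extreme' is taken here as furthest from the series mean."""
    mvs = sorted(mvs)
    while len(mvs) > 2 and (mvs[-1] - mvs[0]) > 8 * pe_mv:
        mean = sum(mvs) / len(mvs)
        if abs(mvs[-1] - mean) >= abs(mvs[0] - mean):
            mvs.pop()       # fastest round is the most extreme
        else:
            mvs.pop(0)      # slowest round is the most extreme
    return mvs

def adopted_mv(round_mvs, temp_corr, weight_corr, pe_mv,
               previous_adopted=None, weighting=0.5):
    """Correct each measured MV to standard conditions (giving IMVs),
    reject extreme rounds, then mean the result with the previous
    Adopted MV using a weighting factor (0.5 assumed here)."""
    imvs = [mv + temp_corr + weight_corr for mv in round_mvs]
    kept = reject_outliers(imvs, pe_mv)
    aimv = sum(kept) / len(kept)
    if previous_adopted is None:
        return aimv
    return weighting * previous_adopted + (1 - weighting) * aimv
```

For example, a series of 490, 491, 492 and 520 m/s with a PEMV of 1 m/s would reject the 520 m/s round and adopt 491 m/s.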

However, a new edition of Pam 14, Part 4 appeared in 1979.  This modified the procedure to use 'charge grouping'.  Analysis of instrumental calibration results had revealed that charges could be grouped into two or three groups for each type of gun and ammunition 'family'; charge super was one of the groups.  Within a charge group, the difference between each gun and the battery mean provided an approximation that could be applied to all charges in the group for that gun.  This significantly reduced the amount of calibration firing needed.  The procedures then used 'deduction of IMVs for charges fired', 'deduction of IMVs for charges not fired' and 'deduction of adopted MVs from adjusted IMVs for charges fired'; the last took account of previously used weightings.  When calibration had not taken place there was a procedure for 'updating previous entry of adjusted IMVs to current wear'.  MVs also had to be adjusted when a new FTZWMV was issued.
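The charge grouping idea reduces to simple arithmetic, sketched below under assumptions: gun identifiers and MV values are invented, and the deduction for a charge not fired is taken to be the battery mean for that charge plus the gun's group offset.

```python
def gun_offsets(charge_group_mvs):
    """charge_group_mvs: {gun_id: IMV for a calibrated charge in the
    group}.  Returns each gun's difference from the battery mean; the
    1979 procedure applied this offset to all charges in the group."""
    mean = sum(charge_group_mvs.values()) / len(charge_group_mvs)
    return {gun: mv - mean for gun, mv in charge_group_mvs.items()}

def imv_for_charge_not_fired(battery_mean_mv, gun_offset):
    """Deduce a gun's IMV for a charge it did not fire from the battery
    mean for that charge and the gun's offset within the charge group."""
    return battery_mean_mv + gun_offset

# Illustrative use: gun B runs 2 m/s fast on the calibrated charge,
# so it is assumed 2 m/s fast on every charge in the same group.
offsets = gun_offsets({'A': 300.0, 'B': 302.0, 'C': 298.0})
```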

These 1970s methods undoubtedly improved the quality of the MVs being used, but they really only provided a ‘snap-shot’ of MV and may or may not have provided representative mean MVs for a particular gun on another occasion.  Furthermore, calibration involved considerable effort and ammunition expenditure that had little training value.  In 1972 a report recommended that continuously recording the MVs of all guns would deal with the problems of occasion to occasion and propellant lot to lot variations.  This approach was enabled by the new small MV radars that could be permanently mounted on a gun.

Needless to say more studies and trials ensued.  The outcome of these, in 1983, was to reject the US notion of an MV radar in every battery and instead to fit a radar to every gun.  This recommendation became an endorsed requirement.  The radar could produce a running mean MV if charge temperature and projectile weight were also recorded, although this proved easier said than done in practice.  The business case was cut and dried: the cost of the radars was less than the cost of ammunition for calibration firings.  The Muzzle Velocity Measuring Device (MVMD) was fitted to every gun starting in the early 1990's.

By this time research had also produced two methods of MV prediction, one using neural networks and the other Kalman filters.  These approximately halved the difference between the expected MV of the next round and the actual MV.  However, BATES technology was incapable of supporting ballistic computations on the gun, and even data traffic between the guns and their CP after every round was not a realistic proposition.
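As an illustration of the Kalman filter approach, the sketch below tracks a single slowly drifting MV from noisy round-by-round measurements.  It is a generic scalar Kalman filter, not the trials implementation; the noise variances q and r are invented, and in practice they would be tuned to the gun and radar.

```python
class MVKalman:
    """Minimal scalar Kalman filter for MV prediction.
    State: the gun's true MV, assumed to drift slowly with wear.
    Measurement: a radar MV reading for each round fired."""

    def __init__(self, mv0, p0=25.0, q=0.01, r=4.0):
        self.mv = mv0   # current MV estimate (m/s)
        self.p = p0     # estimate variance
        self.q = q      # process noise: wear drift per round (assumed)
        self.r = r      # measurement noise of the radar (assumed)

    def predict(self):
        """Predict the next round's MV; drift adds uncertainty."""
        self.p += self.q
        return self.mv

    def update(self, measured):
        """Blend the prediction with a new radar measurement."""
        k = self.p / (self.p + self.r)        # Kalman gain
        self.mv += k * (measured - self.mv)
        self.p *= (1.0 - k)
        return self.mv
```

Used round by round, the filter's prediction error shrinks as measurements accumulate, which is the effect the research claimed: roughly halving the gap between expected and actual MV.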

However, fitting an MV radar to each gun also meant that a lot of MV data could be collected if there was a way of easily and accurately recording it.  Of course it had to be corrected for non-standard charge temperature and projectile weight (which the MVMD did, using data entered by the gun detachment), which also meant issuing every gun with a charge temperature thermometer.  The potentially large amount of data also meant that errors in collected data, and gun to gun and occasion to occasion effects, would be 'meaned out', providing reliable data about the relationship between wear and MV.  This in turn would allow a wear measurement of a gun to determine its 'true' MV at any time.

However, there were some snags to this method.  The first was data collection: peacetime firing in Europe is mostly at low charges because of the small size of firing ranges, so the data for high charges was fairly sparse.  Of course, since the beginning of the 21st Century a reasonable number of rounds has been fired on operations using a wider variety of charges, and the data collected.  Next, adopting a new MV means making an accurate measurement of barrel wear; if this is done inaccurately, through human error or an inaccurate instrument, then the wrong MV will be adopted.  Finally, droop and jump (both barrel and carriage), which vary with the charge being fired, are treated as having standard mean values in calculations.  However, if an individual gun has droop or jump that is significantly different from the mean, its fall of shot may be noticeably different from others.  During WW2 a significant difference in jump was found between 25-pdr with and without a muzzle brake; it is unclear whether any difference was found with 25-pdr using and not using its platform.  This problem is resolved by fall of shot calibration but not by instrumental calibration, and as previously mentioned it may have been an issue with 60-pdr in WW1.

Nonetheless MV data accumulated, and it was good quality data.  It confirmed that MV and wear were directly related, which enabled an Adopted MV to be provided for any measured wear.  MVMD was still available if shooting suggested an MV was wrong.  This led to a further simplification in the new fire control computers delivered in the 21st Century.  The wear-MV relationship data was held in the computers, so instead of entering an MV for every charge for each gun, only the wear measurement needed to be entered.  It had taken 90 years to reach this simplicity.
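The wear-to-MV step can be pictured as a simple table lookup with interpolation.  The sketch below is illustrative only: the table values are invented, and a real fire control computer would hold the relationship per gun type and charge.

```python
def adopted_mv_from_wear(wear_mm, wear_mv_table):
    """wear_mv_table: (wear in mm, MV in m/s) pairs sorted by wear,
    built from accumulated MVMD data.  Returns an MV for the measured
    wear by linear interpolation, clamped at the table's ends."""
    pts = sorted(wear_mv_table)
    if wear_mm <= pts[0][0]:
        return pts[0][1]
    if wear_mm >= pts[-1][0]:
        return pts[-1][1]
    for (w0, mv0), (w1, mv1) in zip(pts, pts[1:]):
        if w0 <= wear_mm <= w1:
            frac = (wear_mm - w0) / (w1 - w0)
            return mv0 + frac * (mv1 - mv0)

# Illustrative use: with an invented table, a gun measured at 1.0 mm
# wear is assigned an MV midway between the 0.0 mm and 2.0 mm entries.
table = [(0.0, 500.0), (2.0, 490.0), (4.0, 478.0)]
```

With such a relationship held in the computer, the detachment enters only the wear measurement, which is the simplification the new fire control computers delivered.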


 Copyright © 2006 - 2014 Nigel F Evans. All Rights Reserved.