Roughly, what percentage of arrows can be reused after a battle?
According to some sources, the English longbowman could shoot 10 or more arrows per minute, and some medieval battles lasted many hours. Back-of-the-envelope math says the number of arrows loosed by a single archer was very large indeed. Clearly some arrows would have to be reused a few times in order to maximize firepower in battle while minimizing the baggage train needed for the campaign. However, if I'm shooting arrows at men wearing armour, then some of the ordnance will be damaged on impact and not be reusable in "tomorrow's" battle.
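The back-of-the-envelope math can be made concrete with a quick sketch; every input below (rate of fire, shooting time, archer count, arrows carried) is an illustrative assumption, not a historical figure:

```python
# Back-of-the-envelope estimate of arrows loosed in a battle.
# All figures are illustrative assumptions, not historical data.

ARROWS_PER_MINUTE = 10   # oft-cited sustained rate for an English longbowman
MINUTES_SHOOTING = 30    # actual shooting time across a long battle (assumed)
ARCHERS = 5000           # rough figure for a large archer contingent (assumed)
ARROWS_CARRIED = 60      # a common estimate of arrows carried per archer (assumed)

arrows_loosed = ARROWS_PER_MINUTE * MINUTES_SHOOTING * ARCHERS
arrows_on_hand = ARCHERS * ARROWS_CARRIED

print(f"arrows loosed:  {arrows_loosed:,}")    # 1,500,000
print(f"arrows carried: {arrows_on_hand:,}")   # 300,000
print(f"each arrow must be shot ~{arrows_loosed / arrows_on_hand:.0f} times")
```

Even with conservative inputs, the supply carried falls far short of the volume shot, which is exactly why reuse matters.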

So, on average what percentage of arrows shot in a medieval battle would be reusable in future battle(s)?

I decided to write an answer since I pointed out a lot of the issues in comments (now deleted). I state up front that I don't have historical evidence, but I do have a good view of the practical use of arrows and bows.

An arrow rarely breaks in the middle; in most cases, if it hits solid material, it breaks very close to or at the head. Both the arrow's shaft and head are recoverable if found.

I agree with Felix Goldberg: the archers most probably didn't reuse arrows within a single, unbroken battle, since they had to keep formation and follow orders. They might pick up an arrow if it happened to be right there, pinned into the dirt and undamaged (as when they were themselves targeted by enemy archers), but even this is unlikely, since archers were typically used against footmen and cavalry.

Quote: "However, if I'm shooting arrows at guys wearing armor, then some of the ordnance will get damaged after use and not be reusable in 'tomorrow's' battle."
This is wrong: they can be repaired very easily. For me, in a workshop, it takes 5 minutes; with proper tools and practice it would have taken a similar time in the medieval period. The repair needs gluing, which cures overnight. And I can tell you, it is far easier and quicker to repair your arrows than to make new ones. A typical arrow is reusable 5-6 times before it gets too short; once it is too short, you need to make a new shaft. If the head is not sharp enough, just sharpen it; with proper tools that takes minutes.

I also want to point out that there are no really huge collections of arrowheads in the archaeological record, which suggests they were reused.

My point would need confirmation from someone who actually researches battle histories, but I would assume the rate of arrow recycling was a lot closer to 100% than to 0% for the winning side.

The comment is made elsewhere that archers didn't collect arrows during a battle:

I agree with Felix Goldberg: the archers most probably didn't reuse arrows within a single, unbroken battle, since they had to keep formation and follow orders. They might pick up an arrow if it happened to be right there, pinned into the dirt and undamaged (as when they were themselves targeted by enemy archers), but even this is unlikely, since archers were typically used against footmen and cavalry.

However, this source on the Battle of Crecy explicitly states (my emphasis):

Each successive charge was weaker and during brief pauses in the battle, the English archers stood in their lines with remarkable discipline, only going down the slope far enough to collect their arrows.

I have seen this and similar comments made elsewhere, though this is the only source I can locate just now. It is important to remember that each of the reported charges by the French knights lasted only a few minutes, say 5 or 6 at the outside, as any charge lasting longer has lost its most important advantages: speed and momentum.

Each of these charges would have required a much longer time period, perhaps 20 or 30 minutes, for the participants to rally, form up in units, and move to their respective start zones. During these periods there was ample time for designated individuals to run forward and collect arrows.

A quick look at the attached map of the Crecy battlefield makes clear that the launch zone for the French knights was located 3 or 4 times the effective arrow range from the English lines, so the question of the archers' safety when running forward really doesn't arise.

My understanding is also that the archers and runners running forward to collect arrows had knives and daggers which could also be used to kill any wounded enemies who attempted to resist such endeavours.

So while a specific value for the percentage of arrows reusable during a battle is unavailable, some simple calculations regarding the maximum ground covered by troops during a charge (at the trot/canter and then gallop), coupled with the inherent inaccuracy of bows used essentially as artillery, suggest that perhaps 90% of arrows fired fell harmlessly to the ground (or were deflected with minimal damage by armour), and that most of these could be reused as soon as they were collected.
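As a rough illustration of that kind of estimate (all numbers below are assumptions chosen for the sketch, not sourced figures):

```python
# Rough sketch of the "~90% of arrows fell harmlessly" style of estimate.
# Every figure below is an assumption for illustration, not a sourced number.

volleys_per_charge = 10 * 5   # ~10 arrows/min sustained over a ~5-minute charge
archers = 5000                # assumed archer strength
arrows_fired = volleys_per_charge * archers

hit_rate = 0.10               # assumed fraction striking man or horse
arrows_missed = arrows_fired * (1 - hit_rate)

recoverable = 0.95            # assumed fraction of misses left undamaged in soft ground
reusable = arrows_missed * recoverable

print(f"fired: {arrows_fired:,}, landed harmlessly: {arrows_missed:,.0f}")
print(f"reusable after collection: {reusable:,.0f} "
      f"({reusable / arrows_fired:.0%} of all arrows fired)")
```

Under these assumptions roughly 85% of everything fired comes back into the supply after each collection pass, consistent with the qualitative claim above.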


Note also that the arrowheads of broken arrows are themselves valuable, even if not immediately reusable. I know of no direct claim or evidence that spare arrow shafts and fletching were carried in addition to the arrow supply; but not doing so would seem gross incompetence for an army reliant on its archers.

I seriously doubt anyone ever kept records of that. In the heat of battle you're too busy to take notes, and afterwards it just doesn't matter, unless you ran out of arrows entirely and recorded that fact in your memoirs, which for the reasons above would be unlikely to include detailed numbers, only a mention of the fact.
It's certainly not inconceivable that reuse happened, especially during sieges.
But do keep in mind that it would require reasonably similar bows between the different armies.
If the English were using longbows requiring arrows a metre long while the French were using crossbows with 30 cm bolts, for example, there would be no way to reuse those (though a Frenchman might, in an emergency, break a recovered English arrow into pieces and shoot those off; without fletching they would be highly inaccurate).
That makes reuse by the enemy especially unlikely in the case of the English longbow, a quite specific weapon not, as far as I know, used by anyone else.

Battle of Cannae


Battle of Cannae (August 216 BCE), battle fought near the ancient village of Cannae, in southern Apulia (modern Puglia), southeastern Italy, between the forces of Rome and Carthage during the Second Punic War. The Romans were crushed by the African, Gallic, and Celtiberian troops of Hannibal, with recorded Roman losses ranging from 55,000 (according to Roman historian Livy) to 70,000 (according to Greek historian Polybius). One of the most significant battles in history, it is regarded by military historians as a classic example of a victorious double envelopment.

Hannibal was the first to arrive at the battle site, with a force of about 40,000 infantry and 10,000 cavalry. His army took command of the Aufidus (now Ofanto) River, the main source of water in the area. That added to the strain on the Romans, who would struggle to satisfy the thirst of their greater number of soldiers in the early August heat. Hannibal positioned his lines facing north, compelling the Romans to face mostly to the south, where the hot libeccio wind blew dust and grit into their eyes, an irritant and disadvantage that, according to ancient authorities, cannot be ignored. In addition, Hannibal confined the eight Roman legions in a narrow valley, hemmed in by the river. In one stroke, Hannibal thus restricted the mobility of the Roman cavalry and forced the Roman infantry to adopt a formation that was deeper than it was wide, two factors that would prove critical in the outcome of the battle.

Breaking from the Fabian strategy of nonengagement, the Roman consuls Lucius Aemilius Paullus and Gaius Terentius Varro brought to Cannae roughly 80,000 men, about half of whom lacked significant battle experience. They sought to meet Hannibal, who had just taken a highly coveted grain depot at Canusium, in the hope of delivering a knockout blow and ending the destructive Carthaginian invasion of Italy. Terentius Varro had been popularly elected as a plebeian consular political appointee, and ancient sources describe his character as overconfident and rash, ascribing to him the hope that he could overwhelm Hannibal with sheer numbers. Aemilius Paullus, however, was both a veteran and patrician from an established military family, and he was justifiably cautious about facing Hannibal on his enemy’s terms.

The Romans faced southwest, with their right wing resting on the Aufidus and with the sea about three miles (five kilometres) to their rear. They placed their cavalry (about 6,000) on their wings and massed their infantry in an exceptionally deep and narrow formation in the centre in the hope of breaking the enemy centre by weight and push. To counter that, Hannibal relied on the elasticity of his formation. He stationed his Gallic and Spanish infantry in the centre, two groups of his African troops on their flanks, and the cavalry on the wings. But before engaging the enemy, his line adopted a crescent shape, the centre advancing with the African troops on their flanks en échelon. As Hannibal had anticipated, his cavalry won the struggle on the wings, and some then swept around behind the enemy.

Meanwhile, the Roman infantry gradually forced back Hannibal’s centre, and victory or defeat turned upon whether the latter held. It did: although it fell back, it did not break, and the Roman centre was gradually drawn forward into a trap. Hannibal’s crescent became a circle, with Hannibal’s African and Spanish troops on the wings pressing inward on the Romans and the Carthaginian cavalry attacking from the rear. Some of the equipment used by troops engaging the Roman flanks—especially shields and other armour—had been taken from dead Romans after the Carthaginian victory at Trasimene. That may have further confused the Romans, who were already fighting through a steady torrent of dust. Pressed tightly together and hence unable to properly use their arms, the Romans were surrounded and cut to pieces. It is possible that the falcata, a brutally effective curved short sword employed by Celtiberian troops, played some role in the dismemberment of the Roman ranks.

Terentius Varro fled the field of battle with the remnants of the Roman and allied cavalry. Aemilius Paullus was killed along with many other high-ranking commanders, including Gnaeus Servilius Geminus, Marcus Minucius Rufus, and other veteran patricians. Among the Roman dead were 28 of 40 tribunes, up to 80 Romans of Senatorial or high magistrate rank, and at least 200 knights (Romans of equestrian rank). It was estimated that 20 percent of Roman fighting men between the ages of 18 and 50 died at Cannae. Only 14,000 Roman soldiers escaped, and 10,000 more were captured; the rest were killed. The Carthaginians lost about 6,000 men.

When word of the defeat reached Rome, panic gripped the city, and women flocked to temples to weep for their lost husbands, sons, and brothers. Hannibal was exhorted to march on Rome by Maharbal, one of his commanders, but Hannibal did not do so. Livy reports that Maharbal then told Hannibal that he knew how to win battles but not how to take advantage of them. For his part, Hannibal had hoped that many South Italians would desert Rome and ally with him after his crushing victory. In spite of the massive blow to Rome’s morale and its manpower in the short term, Cannae ultimately steeled Roman resistance for the long fight ahead. Rome resumed the Fabian strategy, denying Hannibal the opportunity to achieve a second victory of Cannae’s scale, and Hannibal saw the strength of his armies and his allies whittled away through slow attrition.


Chemical warfare is different from the use of conventional weapons or nuclear weapons because the destructive effects of chemical weapons are not primarily due to any explosive force. The offensive use of living organisms (such as anthrax) is considered biological warfare rather than chemical warfare; however, the use of nonliving toxic products produced by living organisms (e.g. toxins such as botulinum toxin, ricin, and saxitoxin) is considered chemical warfare under the provisions of the Chemical Weapons Convention (CWC). Under this convention, any toxic chemical, regardless of its origin, is considered a chemical weapon unless it is used for purposes that are not prohibited (an important legal definition known as the General Purpose Criterion). [2]

About 70 different chemicals have been used or stockpiled as chemical warfare agents during the 20th century. The entire class known as Lethal Unitary Chemical Agents and Munitions have been scheduled for elimination by the CWC. [3]

Under the convention, chemicals that are toxic enough to be used as chemical weapons, or that may be used to manufacture such chemicals, are divided into three groups according to their purpose and treatment:

    Schedule 1 – Have few, if any, legitimate uses. These may only be produced or used for research, medical, pharmaceutical or protective purposes (i.e. testing of chemical weapons sensors and protective clothing). Examples include nerve agents, ricin, lewisite and mustard gas. Any production over 100 g must be reported to the OPCW, and a country may stockpile no more than one tonne of these chemicals. [citation needed]

    Schedule 2 – Have no large-scale industrial uses, but may have legitimate small-scale uses. Examples include dimethyl methylphosphonate, a precursor to sarin that is also used as a flame retardant, and thiodiglycol, a precursor in the manufacture of mustard gas that is also widely used as a solvent in inks.

    Schedule 3 – Have legitimate large-scale industrial uses. Examples include phosgene and chloropicrin. Both have been used as chemical weapons, but phosgene is an important precursor in the manufacture of plastics and chloropicrin is used as a fumigant. The OPCW must be notified of, and may inspect, any plant producing more than 30 tonnes per year.

Simple chemical weapons were used sporadically throughout antiquity and into the industrial age. [4] It was not until the 19th century that the modern conception of chemical warfare emerged, as various scientists and nations proposed the use of asphyxiating or poisonous gases.

So alarmed were nations and scientists that multiple international treaties were passed banning chemical weapons. This, however, did not prevent the extensive use of chemical weapons in World War I. Chlorine gas, among others, was used by both sides to try to break the stalemate of trench warfare. Though largely ineffective over the long run, it decidedly changed the nature of the war. In many cases the gases used did not kill, but instead horribly maimed, injured, or disfigured casualties. Some 1.3 million gas casualties were recorded, which may have included up to 260,000 civilian casualties. [5] [6] [7]

The interwar years saw occasional use of chemical weapons, mainly to put down rebellions. [8] In Nazi Germany, much research went into developing new chemical weapons, such as potent nerve agents. [9] However, chemical weapons saw little battlefield use in World War II. Both sides were prepared to use such weapons, but the Allied powers never did, and the Axis used them only very sparingly. The reason for the lack of use by the Nazis, despite the considerable efforts that had gone into developing new varieties, might have been a lack of technical ability or fears that the Allies would retaliate with their own chemical weapons. Those fears were not unfounded: the Allies made comprehensive plans for defensive and retaliatory use of chemical weapons, and stockpiled large quantities. [10] [11] Japanese forces used them more widely, though only against their Asian enemies, as they also feared that using them against Western powers would result in retaliation. Chemical weapons were frequently used against Kuomintang and Chinese communist troops. [12] However, the Nazis did extensively use poison gas against civilians in the Holocaust. Vast quantities of Zyklon B gas and carbon monoxide were used in the gas chambers of Nazi extermination camps, resulting in the overwhelming majority of some three million deaths. This remains the deadliest use of poison gas in history. [13] [14] [15] [16]

The post-war era has seen limited, though devastating, use of chemical weapons. During the Vietnam War, between 1962 and 1971, the United States military sprayed nearly 20,000,000 U.S. gallons (76,000 m 3 ) of various chemicals – the "rainbow herbicides" and defoliants – in Vietnam, eastern Laos, and parts of Cambodia as part of Operation Ranch Hand, reaching its peak from 1967 to 1969. [17] Some 100,000 Iranian troops were casualties of Iraqi chemical weapons during the Iran–Iraq War. [18] [19] [20] Iraq used mustard gas and nerve agents against its own civilians in the 1988 Halabja chemical attack. [21] The Cuban intervention in Angola saw limited use of organophosphates. [22] The Syrian government has used sarin, chlorine, and mustard gas in the Syrian civil war – generally against civilians. [23] [24] Terrorist groups have also used chemical weapons, notably in the Tokyo subway sarin attack and the Matsumoto incident. [25] [26] See also chemical terrorism.

Chemical warfare technology timeline
| Year  | Agents                    | Dissemination                    | Protection                                                                    | Detection                                   |
|-------|---------------------------|----------------------------------|-------------------------------------------------------------------------------|---------------------------------------------|
| 1914  | Chlorine, sulfur mustard  | Wind dispersal                   | Gas masks, urine-soaked gauze                                                 | Smell                                       |
| 1918  | Lewisite                  | Chemical shells                  | Gas mask, rosin oil clothing                                                  | Smell of geraniums                          |
| 1920s |                           | Projectiles w/ central bursters  | CC-2 clothing                                                                 |                                             |
| 1930s | G-series nerve agents     | Aircraft bombs                   |                                                                               | Blister agent detectors; color change paper |
| 1940s |                           | Missile warheads, spray tanks    | Protective ointment (mustard), collective protection, gas mask w/ whetlerite  |                                             |
| 1960s | V-series nerve agents     | Aerodynamic                      | Gas mask w/ water supply                                                      | Nerve gas alarm                             |
| 1980s |                           | Binary munitions                 | Improved gas masks (protection, fit, comfort)                                 | Laser detection                             |
| 1990s | Novichok nerve agents     |                                  |                                                                               |                                             |

Although crude chemical warfare has been employed in many parts of the world for thousands of years, [27] "modern" chemical warfare began during World War I – see Chemical weapons in World War I.

Initially, only well-known commercially available chemicals and their variants were used. These included chlorine and phosgene gas. The methods used to disperse these agents during battle were relatively unrefined and inefficient. Even so, casualties could be heavy, due to the mainly static troop positions which were characteristic features of trench warfare.

Germany, the first side to employ chemical warfare on the battlefield, [28] simply opened canisters of chlorine upwind of the opposing side and let the prevailing winds do the dissemination. Soon after, the French modified artillery munitions to contain phosgene – a much more effective method that became the principal means of delivery. [29]

Since the development of modern chemical warfare in World War I, nations have pursued research and development on chemical weapons that falls into four major categories: new and more deadly agents; more efficient methods of delivering agents to the target (dissemination); more reliable means of defense against chemical weapons; and more sensitive and accurate means of detecting chemical agents.

Chemical warfare agents

A chemical used in warfare is called a chemical warfare agent (CWA). About 70 different chemicals have been used or stockpiled as chemical warfare agents during the 20th and 21st centuries. These agents may be in liquid, gas or solid form. Liquid agents that evaporate quickly are said to be volatile or to have a high vapor pressure. Many chemical agents are made volatile so they can be dispersed over a large region quickly. [30]

The earliest target of chemical warfare agent research was not toxicity but the development of agents that could affect a target through the skin and clothing, rendering protective gas masks useless. In July 1917, the Germans employed sulfur mustard. Mustard agents easily penetrate leather and fabric to inflict painful burns on the skin.

Chemical warfare agents are divided into lethal and incapacitating categories. A substance is classified as incapacitating if less than 1/100 of the lethal dose causes incapacitation, e.g., through nausea or visual problems. The distinction between lethal and incapacitating substances is not fixed, but relies on a statistical average called the LD50.
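As a minimal sketch of that classification rule, assuming hypothetical dose figures (the ID50/LD50 values below are invented for illustration):

```python
# Sketch of the incapacitating-vs-lethal classification rule described above:
# an agent counts as "incapacitating" if incapacitation occurs at less than
# 1/100 of the median lethal dose. All dose values are invented examples.

def is_incapacitating(id50: float, ld50: float) -> bool:
    """ID50 = dose incapacitating 50% of subjects; LD50 = dose lethal to 50%."""
    return id50 < ld50 / 100

# Hypothetical agent A: incapacitates at 0.1 mg/kg, lethal at 50 mg/kg.
print(is_incapacitating(0.1, 50.0))   # True  (0.1 < 0.5)

# Hypothetical agent B: incapacitates at 5 mg/kg, lethal at 50 mg/kg.
print(is_incapacitating(5.0, 50.0))   # False (5.0 >= 0.5)
```

The point of the statistical definition is that both thresholds are population medians, so the boundary between "incapacitating" and "lethal" is a ratio of averages, not a sharp physical property.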

Persistency

Chemical warfare agents can be classified according to their persistency, a measure of the length of time that a chemical agent remains effective after dissemination. Chemical agents are classified as persistent or nonpersistent.

Agents classified as nonpersistent lose effectiveness after only a few minutes or hours or even only a few seconds. Purely gaseous agents such as chlorine are nonpersistent, as are highly volatile agents such as sarin. Tactically, nonpersistent agents are very useful against targets that are to be taken over and controlled very quickly.

Apart from the agent used, the delivery mode is very important. To achieve a nonpersistent deployment, the agent is dispersed as very small droplets, comparable to the mist produced by an aerosol can. In this form, not only the gaseous part of the agent (around 50%) but also the fine aerosol can be inhaled or absorbed through the skin.

Modern doctrine requires very high concentrations almost instantly in order to be effective (one breath should contain a lethal dose of the agent). To achieve this, the primary weapons used would be rocket artillery, bombs, and large ballistic missiles with cluster warheads. Contamination in the target area is low or nonexistent; after four hours, sarin or similar agents are no longer detectable.

By contrast, persistent agents tend to remain in the environment for as long as several weeks, complicating decontamination. Defense against persistent agents requires shielding for extended periods of time. Non-volatile liquid agents, such as blister agents and the oily VX nerve agent, do not easily evaporate into a gas, and therefore present primarily a contact hazard.

The droplet size used for persistent delivery goes up to 1 mm, which increases the falling speed; as a result, about 80% of the deployed agent reaches the ground, producing heavy contamination. Deployment of persistent agents is intended to constrain enemy operations by denying access to contaminated areas.
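The settling contrast behind that figure can be illustrated with Stokes' law for the terminal velocity of a small sphere in air. Stokes drag is only accurate for the finest droplets and substantially overestimates the speed of a 1 mm drop, but the orders-of-magnitude contrast is the point; the density and viscosity values are standard constants, and the droplet sizes are illustrative:

```python
# Why large droplets settle fast: Stokes' terminal velocity
#   v = 2 r^2 rho g / (9 mu),  neglecting the density of air.
# Accurate only at low Reynolds number (fine droplets); for a 1 mm drop it is
# an overestimate, but the size contrast it shows still holds.

RHO = 1000.0   # droplet density, kg/m^3 (water-like liquid agent, assumed)
MU = 1.8e-5    # dynamic viscosity of air, Pa*s
G = 9.81       # gravitational acceleration, m/s^2

def stokes_velocity(radius_m: float) -> float:
    """Terminal settling speed of a sphere in air under Stokes drag."""
    return 2 * radius_m**2 * RHO * G / (9 * MU)

print(f"10 um aerosol droplet:   {stokes_velocity(5e-6):.4f} m/s")  # drifts, is inhaled
print(f"1 mm persistent droplet: {stokes_velocity(5e-4):.0f} m/s")  # upper bound; falls out in seconds
```

Because terminal velocity scales with the square of the radius, a 100x larger droplet settles on the order of 10,000x faster, which is why millimetre droplets reach the ground instead of lingering as an inhalable cloud.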

Possible targets include enemy flank positions (averting possible counterattacks), artillery regiments, command posts or supply lines. Because it is not necessary to deliver large quantities of the agent in a short period of time, a wide variety of weapons systems can be used.

A special form of persistent agent is the thickened agent: a common agent mixed with thickeners to produce a gelatinous, sticky agent. Primary targets for this kind of use include airfields, due to the increased persistency and the difficulty of decontaminating affected areas.

Classes

Chemical warfare agents fall into four broad categories: choking, blister, blood and nerve. [31] The agents are organized into categories according to the manner in which they affect the human body. The names and number of categories vary slightly from source to source, but in general the types of chemical warfare agents are as follows:

    Nerve agents: cyclosarin (GF), sarin (GB), soman (GD), tabun (GA); some insecticides act in the same way.
  • Signs and symptoms: miosis (pinpoint pupils); blurred/dim vision; headache; nausea, vomiting, diarrhea; copious secretions/sweating; muscle twitching/fasciculations; loss of consciousness
  • Onset: vapors, seconds to minutes; skin, 2 to 18 hours

    Blood agents: most arsines; cyanogen chloride/hydrogen cyanide.
  • Arsine: causes intravascular hemolysis that may lead to renal failure
  • Cyanogen chloride/hydrogen cyanide: cyanide directly prevents cells from using oxygen; the cells then fall back on anaerobic respiration, creating excess lactic acid and metabolic acidosis
  • Signs and symptoms: possible cherry-red skin; possible cyanosis; confusion; nausea; gasping for air; seizures prior to death

    Blister agents: sulfur mustards (HD, H), nitrogen mustards (HN-1, HN-2, HN-3), lewisite (L), phosgene oxime (CX).
  • Signs and symptoms: severe skin, eye and mucosal pain and irritation; skin erythema with large fluid blisters that heal slowly and may become infected; conjunctivitis and corneal damage; mild respiratory distress to marked airway damage
  • Onset: mustards, vapors 4 to 6 hours (eyes and lungs affected more rapidly), skin 2 to 48 hours; lewisite, immediate

    Choking agents:
  • Signs and symptoms: airway irritation; eye and skin irritation; cough; sore throat; chest tightness; wheezing

    Incapacitating agents (e.g. BZ):
  • May appear as mass drug intoxication with erratic behavior, shared realistic and distinct hallucinations, disrobing, confusion, lack of coordination, and dilated pupils; dry mouth and skin
  • Onset: inhaled, 30 minutes to 20 hours; skin, up to 36 hours after exposure. Duration is typically 72 to 96 hours

Non-living biological proteins (toxins) are also classed as chemical agents. Typical effects:

  • Latent period of 4-8 hours, followed by flu-like signs and symptoms
  • Progression within 18-24 hours to:
    • Inhalation: nausea, cough, dyspnea, pulmonary edema
    • Ingestion: gastrointestinal hemorrhage with emesis and bloody diarrhea; eventual liver and kidney failure

    There are other chemicals used militarily that are not scheduled by the Chemical Weapons Convention, and thus are not controlled under the CWC treaties. These include:

      Defoliants and herbicides destroy vegetation but are not immediately toxic or poisonous to human beings; their use is classified as herbicidal warfare. Some batches of Agent Orange, for instance, used by the British during the Malayan Emergency and by the United States during the Vietnam War, contained dioxins as manufacturing impurities. Dioxins, rather than Agent Orange itself, have long-term carcinogenic effects and cause genetic damage leading to serious birth defects.
      Incendiary or explosive chemicals (such as napalm, extensively used by the United States during the Korean War and the Vietnam War, or dynamite) are excluded because their destructive effects are primarily due to fire or explosive force, not direct chemical action; their use is classified as conventional warfare.
      Living organisms, such as viruses and bacteria, fall under biological warfare. Toxins produced by living organisms are considered chemical weapons, although the boundary is blurry; toxins are also covered by the Biological Weapons Convention.

    Designations

    Most chemical weapons are assigned a one- to three-letter "NATO weapon designation" in addition to, or in place of, a common name. Binary munitions, in which precursors for chemical warfare agents are automatically mixed in the shell to produce the agent just prior to use, are indicated by a "-2" following the agent's designation (for example, GB-2 and VX-2).


    Delivery

    The most important factor in the effectiveness of chemical weapons is the efficiency of its delivery, or dissemination, to a target. The most common techniques include munitions (such as bombs, projectiles, warheads) that allow dissemination at a distance and spray tanks which disseminate from low-flying aircraft. Developments in the techniques of filling and storage of munitions have also been important.

    Although there have been many advances in chemical weapon delivery since World War I, it is still difficult to achieve effective dispersion. The dissemination is highly dependent on atmospheric conditions because many chemical agents act in gaseous form. Thus, weather observations and forecasting are essential to optimize weapon delivery and reduce the risk of injuring friendly forces. [ citation needed ]

    Dispersion

    Dispersion is placing the chemical agent upon or adjacent to a target immediately before dissemination, so that the material is most efficiently used. Dispersion is the simplest technique of delivering an agent to its target. The most common techniques are munitions, bombs, projectiles, spray tanks and warheads.

    World War I saw the earliest implementation of this technique. The actual first chemical ammunition was the French 26 mm cartouche suffocante rifle grenade, fired from a flare carbine. It contained 35g of the tear-producer ethyl bromoacetate, and was used in autumn 1914 – with little effect on the Germans.

    The Germans, by contrast, tried to increase the effect of 10.5 cm shrapnel shells by adding an irritant, dianisidine chlorosulfonate. Its use went unnoticed by the British when it was used against them at Neuve Chapelle in October 1914. Hans Tappen, a chemist in the Heavy Artillery Department of the War Ministry, suggested to his brother, the Chief of the Operations Branch at German General Headquarters, the use of the tear gases benzyl bromide or xylyl bromide.

    Shells were tested successfully at the Wahn artillery range near Cologne on January 9, 1915, and an order was placed for 15 cm howitzer shells, designated 'T-shells' after Tappen. A shortage of shells limited the first use against the Russians at Bolimów on January 31, 1915; the liquid failed to vaporize in the cold weather, and again the experiment went unnoticed by the Allies.

    The first effective use was when the German forces at the Second Battle of Ypres simply opened cylinders of chlorine and allowed the wind to carry the gas across enemy lines. While simple, this technique had numerous disadvantages. Moving large numbers of heavy gas cylinders to front-line positions from which the gas would be released was a lengthy and difficult logistical task.

    Stockpiles of cylinders had to be stored at the front line, posing a great risk if hit by artillery shells. Gas delivery depended greatly on wind speed and direction. If the wind was fickle, as at Loos, the gas could blow back, causing friendly casualties.

    Gas clouds gave plenty of warning, allowing the enemy time to protect themselves, though many soldiers found the sight of a creeping gas cloud unnerving. This made the gas doubly effective: in addition to damaging the enemy physically, it also had a psychological effect on the intended victims.

    Another disadvantage was that gas clouds had limited penetration, capable only of affecting the front-line trenches before dissipating. Although it produced limited results in World War I, this technique shows how simple chemical weapon dissemination can be.

    Shortly after this "open canister" dissemination, French forces developed a technique for delivery of phosgene in a non-explosive artillery shell. This technique overcame many of the risks of dealing with gas in cylinders. First, gas shells were independent of the wind and increased the effective range of gas, making any target within reach of guns vulnerable. Second, gas shells could be delivered without warning, especially the clear, nearly odorless phosgene—there are numerous accounts of gas shells, landing with a "plop" rather than exploding, being initially dismissed as dud high explosive or shrapnel shells, giving the gas time to work before the soldiers were alerted and took precautions.

    The major drawback of artillery delivery was the difficulty of achieving a killing concentration. Each shell had a small gas payload and an area would have to be subjected to saturation bombardment to produce a cloud to match cylinder delivery. A British solution to the problem was the Livens Projector. This was effectively a large-bore mortar, dug into the ground that used the gas cylinders themselves as projectiles – firing a 14 kg cylinder up to 1500 m. This combined the gas volume of cylinders with the range of artillery.

    Over the years, there were some refinements in this technique. In the 1950s and early 1960s, chemical artillery rockets and cluster bombs contained a multitude of submunitions, so that a large number of small clouds of the chemical agent would form directly on the target.

    Thermal dissemination

    Thermal dissemination is the use of explosives or pyrotechnics to deliver chemical agents. This technique, developed in the 1920s, was a major improvement over earlier dispersal techniques, in that it allowed significant quantities of an agent to be disseminated over a considerable distance. Thermal dissemination remains the principal method of disseminating chemical agents today.

    Most thermal dissemination devices consist of a bomb or projectile shell that contains a chemical agent and a central "burster" charge; when the burster detonates, the agent is expelled laterally.

    Thermal dissemination devices, though common, are not particularly efficient. First, a percentage of the agent is lost by incineration in the initial blast and by being forced onto the ground. Second, particle sizes vary greatly, because explosive dissemination produces a mixture of liquid droplets of variable and difficult-to-control sizes.

    The efficacy of thermal detonation is greatly limited by the flammability of some agents. For flammable aerosols, the cloud is sometimes totally or partially ignited by the disseminating explosion in a phenomenon called flashing. Explosively disseminated VX will ignite roughly one third of the time. Despite a great deal of study, flashing is still not fully understood, and a solution to the problem would be a major technological advance.

    Despite the limitations of central bursters, most nations use this method in the early stages of chemical weapon development, in part because standard munitions can be adapted to carry the agents.

    Aerodynamic dissemination

    Aerodynamic dissemination is the non-explosive delivery of a chemical agent from an aircraft, allowing aerodynamic stress to disseminate the agent. This technique is the most recent major development in chemical agent dissemination, originating in the mid-1960s.

    This technique eliminates many of the limitations of thermal dissemination by eliminating the flashing effect and theoretically allowing precise control of particle size. In actuality, the altitude of dissemination, wind direction and velocity, and the direction and velocity of the aircraft greatly influence particle size. There are other drawbacks as well: ideal deployment requires precise knowledge of aerodynamics and fluid dynamics, and because the agent must usually be dispersed within the boundary layer (less than 200–300 ft above the ground), it puts pilots at risk.

    Significant research is still being applied toward this technique. For example, by modifying the properties of the liquid, its breakup when subjected to aerodynamic stress can be controlled and an idealized particle distribution achieved, even at supersonic speed. Additionally, advances in fluid dynamics, computer modeling, and weather forecasting allow an ideal direction, speed, and altitude to be calculated, such that warfare agent of a predetermined particle size can predictably and reliably hit a target.

    Protection against chemical warfare

    Ideal protection begins with nonproliferation treaties such as the Chemical Weapons Convention, and with detecting, very early, the signatures of someone building a chemical weapons capability. These draw on a wide range of intelligence disciplines: economic analysis of exports of dual-use chemicals and equipment; human intelligence (HUMINT) such as diplomatic, refugee, and agent reports; photography from satellites, aircraft and drones (IMINT); examination of captured equipment (TECHINT); communications intercepts (COMINT); and detection of chemical manufacturing and chemical agents themselves (MASINT).

    If all the preventive measures fail and there is a clear and present danger, then there is a need for detection of chemical attacks, [32] collective protection, [33] [34] [35] and decontamination. Since industrial accidents can cause dangerous chemical releases (e.g., the Bhopal disaster), these activities are things that civilian, as well as military, organizations must be prepared to carry out. In civilian situations in developed countries, these are duties of HAZMAT organizations, which most commonly are part of fire departments.

    Detection has been referred to above as a technical MASINT discipline; specific military procedures, which are usually the model for civilian procedures, depend on the equipment, expertise, and personnel available. When chemical agents are detected, an alarm needs to sound, with specific warnings over emergency broadcasts and the like. There may be a warning to expect an attack.

    If, for example, the captain of a US Navy ship believes there is a serious threat of chemical, biological, or radiological attack, the crew may be ordered to set Circle William, which means closing all openings to outside air, running breathing air through filters, and possibly starting a system that continually washes down the exterior surfaces. Civilian authorities dealing with an attack or a toxic chemical accident will invoke the Incident Command System, or local equivalent, to coordinate defensive measures. [35]

    Individual protection starts with a gas mask and, depending on the nature of the threat, extends through various levels of protective clothing up to a complete chemical-resistant suit with a self-contained air supply. The US military defines various levels of MOPP (mission-oriented protective posture), from mask alone to full chemical-resistant suits. Hazmat suits are the civilian equivalent, but go farther, including a fully independent air supply rather than the filters of a gas mask.

    Collective protection allows continued functioning of groups of people in buildings or shelters, the latter of which may be fixed, mobile, or improvised. With ordinary buildings, this may be as basic as plastic sheeting and tape, although if the protection needs to be continued for any appreciable length of time, there will need to be an air supply, typically via an enhanced gas mask. [34] [35]

    Decontamination

    Decontamination varies with the particular chemical agent used. Some nonpersistent agents, including most pulmonary agents (chlorine, phosgene, and so on), blood gases, and nonpersistent nerve gases (e.g., GB), will dissipate from open areas, although powerful exhaust fans may be needed to clear out buildings where they have accumulated.

    In some cases, it might be necessary to neutralize them chemically, as with ammonia as a neutralizer for hydrogen cyanide or chlorine. Riot control agents such as CS will dissipate in an open area, but things contaminated with CS powder need to be aired out, washed by people wearing protective gear, or safely discarded.

    Mass decontamination is a less common requirement for people than equipment, since people may be immediately affected and treatment is the action required. It is a requirement when people have been contaminated with persistent agents. Treatment and decontamination may need to be simultaneous, with the medical personnel protecting themselves so they can function. [36]

    There may need to be immediate intervention to prevent death, such as injection of atropine for nerve agents. Decontamination is especially important for people contaminated with persistent agents; many of the fatalities after the explosion of a WWII US ammunition ship carrying sulfur mustard in the harbor of Bari, Italy, after a German bombing on December 2, 1943, came when rescue workers, not knowing of the contamination, bundled cold, wet seamen in tight-fitting blankets.

    For decontaminating equipment and buildings exposed to persistent agents, such as blister agents, VX, or other agents made persistent by mixing with a thickener, special equipment and materials might be needed. Some type of neutralizing agent will be needed, e.g. a spraying device with neutralizing agents such as chlorine, Fichlor, strong alkaline solutions, or enzymes. In other cases, a specific chemical decontaminant will be required. [35]

    The study of chemicals and their military uses was widespread in China and India. The use of toxic materials has historically been viewed with mixed emotions and moral qualms in the West. The practical and ethical problems surrounding poison warfare appeared in ancient Greek myths about Hercules' invention of poison arrows and Odysseus's use of toxic projectiles. There are many instances of the use of chemical weapons in battles documented in Greek and Roman historical texts; the earliest example was the deliberate poisoning of Kirrha's water supply with hellebore in the First Sacred War, Greece, about 590 BC. [37]

    One of the earliest reactions to the use of chemical agents was from Rome. Struggling to defend themselves from the Roman legions, Germanic tribes poisoned the wells of their enemies, with Roman jurists having been recorded as declaring "armis bella non venenis geri", meaning "war is fought with weapons, not with poisons." Yet the Romans themselves resorted to poisoning wells of besieged cities in Anatolia in the 2nd century BCE. [38]

    Before 1915 the use of poisonous chemicals in battle was typically the result of local initiative, and not the result of an active government chemical weapons program. There are many reports of the isolated use of chemical agents in individual battles or sieges, but there was no true tradition of their use outside of incendiaries and smoke. Despite this tendency, there have been several attempts to initiate large-scale implementation of poison gas in several wars, but with the notable exception of World War I, the responsible authorities generally rejected the proposals for ethical reasons or fears of retaliation.

    For example, in 1854 Lyon Playfair (later 1st Baron Playfair, GCB, PC, FRS; 1818–1898), a British chemist, proposed using a cacodyl cyanide-filled artillery shell against enemy ships during the Crimean War. The British Ordnance Department rejected the proposal as "as bad a mode of warfare as poisoning the wells of the enemy."

    Efforts to eradicate chemical weapons

    Countries with known or possible chemical weapons, as of 2013

    Nation | CW Possession | Signed CWC | Ratified CWC
    Albania | Known | January 14, 1993 [39] | May 11, 1994 [39]
    China | Probable | January 13, 1993 | April 4, 1997
    Egypt | Probable | No | No
    India | Known | January 14, 1993 | September 3, 1996
    Iran | Known | January 13, 1993 | November 3, 1997
    Israel | Probable | January 13, 1993 [40] | No
    Japan | Probable | January 13, 1993 | September 15, 1995
    Libya | Known | No | January 6, 2004
    Myanmar (Burma) | Possible | January 14, 1993 [40] | July 8, 2015 [41]
    North Korea | Known | No | No
    Pakistan | Probable | January 13, 1993 | October 28, 1997
    Russia | Known | January 13, 1993 | November 5, 1997
    Serbia and Montenegro | Probable | No | April 20, 2000
    Sudan | Possible | No | May 24, 1999
    Syria | Known | No | September 14, 2013
    Taiwan | Possible | n/a | n/a
    United States | Known | January 13, 1993 | April 25, 1997
    Vietnam | Probable | January 13, 1993 | September 30, 1998
    • August 27, 1874: The Brussels Declaration Concerning the Laws and Customs of War is signed, specifically forbidding the "employment of poison or poisoned weapons", although the treaty was not adopted by any nation whatsoever and it never went into effect.
    • September 4, 1900: The First Hague Convention, which includes a declaration banning the "use of projectiles the object of which is the diffusion of asphyxiating or deleterious gases," enters into force.
    • January 26, 1910: The Second Hague Convention enters into force, prohibiting the use of "poison or poisoned weapons" in warfare.
    • February 6, 1922: After World War I, the Washington Arms Conference Treaty prohibited the use of asphyxiating, poisonous or other gases. It was signed by the United States, Britain, Japan, France, and Italy, but France objected to other provisions in the treaty and it never went into effect.
    • February 8, 1928: The Geneva Protocol enters into force, prohibiting the use of "asphyxiating, poisonous or other gases, and of all analogous liquids, materials or devices" and "bacteriological methods of warfare".

    Chemical weapon proliferation

    Despite numerous efforts to reduce or eliminate them, some nations continue to research and/or stockpile chemical warfare agents. The table above summarizes the nations that have either declared weapon stockpiles or are suspected of secretly stockpiling chemical weapons or possessing CW research programs. Notable examples include the United States and Russia.

    In 1997, future US Vice President Dick Cheney opposed ratification of a treaty banning the use of chemical weapons, a recently unearthed letter shows. In a letter dated April 8, 1997, then-Halliburton CEO Cheney told Sen. Jesse Helms, the chairman of the Senate Foreign Relations Committee, that it would be a mistake for America to join the convention. "Those nations most likely to comply with the Chemical Weapons Convention are not likely to ever constitute a military threat to the United States. The governments we should be concerned about are likely to cheat on the CWC, even if they do participate," reads the letter, [42] published by the Federation of American Scientists.

    The CWC was ratified by the Senate that same month. Since then, Albania, Libya, Russia, the United States, and India have declared over 71,000 metric tons of chemical weapon stockpiles, and destroyed about a third of them. Under the terms of the agreement, the United States and Russia agreed to eliminate the rest of their supplies of chemical weapons by 2012. Not having met its goal, the U.S. government estimates remaining stocks will be destroyed by 2017.

    India

    In June 1997, India declared that it had a stockpile of 1,044 tons of sulphur mustard in its possession. India's declaration came after its entry into the Chemical Weapons Convention, which created the Organisation for the Prohibition of Chemical Weapons; on January 14, 1993 India had become one of the original signatories to the convention. By 2005, of the six nations that had declared possession of chemical weapons, India was the only country to meet its deadline for chemical weapons destruction and for inspection of its facilities by the Organisation for the Prohibition of Chemical Weapons. [43] [44] By 2006, India had destroyed more than 75 percent of its chemical weapons and material stockpile and was granted an extension to complete destruction of its stocks by April 2009. On May 14, 2009 India informed the United Nations that it had completely destroyed its stockpile of chemical weapons. [45]

    Iraq

    The Director-General of the Organisation for the Prohibition of Chemical Weapons, Ambassador Rogelio Pfirter, welcomed Iraq's decision to join the OPCW as a significant step toward strengthening global and regional efforts to prevent the spread and use of chemical weapons. The OPCW announced: "The government of Iraq has deposited its instrument of accession to the Chemical Weapons Convention with the Secretary General of the United Nations and within 30 days, on 12 February 2009, will become the 186th State Party to the Convention". Iraq has also declared stockpiles of chemical weapons, and because of its recent accession it is the only State Party exempted from the destruction timeline. [46]

    Japan

    During the Second Sino-Japanese War (1937–1945) Japan stored chemical weapons on the territory of mainland China, a stockpile mostly consisting of a sulfur mustard–lewisite mixture. [47] The weapons are classified as abandoned chemical weapons under the Chemical Weapons Convention, and in September 2010 Japan began destroying them in Nanjing using mobile destruction facilities. [48]

    Russia

    Russia signed the Chemical Weapons Convention on January 13, 1993 and ratified it on November 5, 1997. In 1997 it declared an arsenal of 39,967 tons of chemical weapons, by far the largest declared arsenal, consisting of blister agents (lewisite, sulfur mustard, lewisite–mustard mix) and nerve agents (sarin, soman, and VX). Russia met its treaty obligations by destroying 1 percent of its chemical agents by the 2002 deadline set out by the Chemical Weapons Convention, but requested an extension on the deadlines of 2004 and 2007 due to the technical, financial, and environmental challenges of chemical disposal. Since then, Russia has received help from other countries, such as Canada, which donated C$100,000, plus a further C$100,000 already donated, to the Russian Chemical Weapons Destruction Program. This money will be used to complete work at Shchuch'ye and support the construction of a chemical weapons destruction facility at Kizner (Russia), where the destruction of nearly 5,700 tons of nerve agent, stored in approximately 2 million artillery shells and munitions, will be undertaken. Canadian funds are also being used for the operation of a Green Cross Public Outreach Office, to keep the civilian population informed of the progress made in chemical weapons destruction activities. [49]

    As of July 2011, Russia had destroyed 48 percent (18,241 tons) of its stockpile at destruction facilities located in Gorny (Saratov Oblast) and Kambarka (Udmurt Republic), where operations have finished, and Shchuch'ye (Kurgan Oblast), Maradykovsky (Kirov Oblast), and Leonidovka (Penza Oblast), whilst installations were under construction in Pochep (Bryansk Oblast) and Kizner (Udmurt Republic). [50] As of August 2013, 76 percent (30,500 tons) had been destroyed, [51] and Russia left the Cooperative Threat Reduction (CTR) Program, which had partially funded chemical weapons destruction. [52]

    In September 2017, OPCW announced that Russia had destroyed its entire chemical weapons stockpile. [53]

    United States

    On November 25, 1969, President Richard Nixon unilaterally renounced the use of chemical weapons and renounced all methods of biological warfare. He issued a decree halting the production and transport of all chemical weapons, which remains in effect. From May 1964 to the early 1970s the USA participated in Operation CHASE, a United States Department of Defense program that aimed to dispose of chemical weapons by sinking ships laden with the weapons in the deep Atlantic. After the Marine Protection, Research, and Sanctuaries Act of 1972, Operation CHASE was scrapped and safer disposal methods for chemical weapons were researched, with the U.S. destroying several thousand tons of sulfur mustard by incineration at the Rocky Mountain Arsenal, and nearly 4,200 tons of nerve agent by chemical neutralisation at Tooele Army Depot. [54]

    The U.S. ratified the Geneva Protocol, which banned the use of chemical and biological weapons, on January 22, 1975. In 1989 and 1990, the U.S. and the Soviet Union entered an agreement to end both of their chemical weapons programs, including binary weapons. In April 1997, the United States ratified the Chemical Weapons Convention, which banned the possession of most types of chemical weapons. It also banned the development of chemical weapons and required the destruction of existing stockpiles, precursor chemicals, production facilities, and their weapon delivery systems.

    The U.S. began stockpile reductions in the 1980s with the removal of outdated munitions and the destruction of its entire stock of 3-Quinuclidinyl benzilate (BZ or Agent 15) at the beginning of 1988. In June 1990 the Johnston Atoll Chemical Agent Disposal System began destruction of chemical agents stored on the Johnston Atoll in the Pacific, seven years before the Chemical Weapons Convention came into effect. In 1986 President Ronald Reagan made an agreement with Chancellor Helmut Kohl to remove the U.S. stockpile of chemical weapons from Germany. In 1990, as part of Operation Steel Box, two ships were loaded with over 100,000 shells containing sarin and VX, taken from U.S. Army weapons storage depots such as Miesau and then-classified FSTS (Forward Storage / Transportation Sites), and transported from Bremerhaven, Germany to Johnston Atoll in the Pacific, a 46-day nonstop journey. [55]

    In May 1991, President George H. W. Bush committed the United States to destroying all of its chemical weapons and renounced the right to chemical weapon retaliation. In 1993, the United States signed the Chemical Weapons Convention, which required the destruction of all chemical weapon agents, dispersal systems, and production facilities by April 2012. The U.S. prohibition on the transport of chemical weapons has meant that destruction facilities had to be constructed at each of the U.S.'s nine storage facilities. The U.S. met the first three of the four deadlines set out in the treaty, destroying 45% of its stockpile of chemical weapons by 2007. Under the United States policy of Proportional Response, an attack upon the United States or its allies would trigger a force-equivalent counter-attack. Since the United States maintains only nuclear weapons of mass destruction, its stated policy is to regard any WMD attack (biological, chemical, or nuclear) as a nuclear attack and to respond with a nuclear strike. [56]

    As of 2012, stockpiles have been eliminated at 7 of the 9 chemical weapons depots and 89.75% of the 1997 stockpile has been destroyed by the treaty deadline of April 2012. [57] Destruction will not begin at the two remaining depots until after the treaty deadline and will use neutralization, instead of incineration.

    Herbicidal warfare

    Although herbicidal warfare uses chemical substances, its main purpose is to disrupt agricultural food production and/or to destroy plants that provide cover or concealment to the enemy.

    The use of herbicides as a chemical weapon by the U.S. military during the Vietnam War has left tangible, long-term impacts on the people of Vietnam. [58] [59] For instance, it led to 3 million Vietnamese people suffering health problems, one million birth defects caused directly by exposure to Agent Orange, and 24% of the area of Vietnam being defoliated. [60] The United States fought secret wars in Laos and Cambodia, dropping large quantities of Agent Orange in each of those countries. According to one estimate, the U.S. dropped 475,500 gallons of Agent Orange in Laos and 40,900 gallons in Cambodia. [61] [62] [63] Because Laos and Cambodia were neutral during the Vietnam War, the U.S. attempted to keep its wars there, including its bombing campaigns, secret from the American population, and has largely avoided recognizing the debilitating effects on the people exposed at the time and the major birth defects caused for generations that followed. It also avoided compensating American veterans and CIA personnel stationed in Cambodia and Laos who suffered permanent injuries as a result of exposure to Agent Orange there. [62] [64]

    Anti-livestock

    During the Mau Mau Uprising in 1952, the poisonous latex of the African milk bush was used to kill cattle. [65]


    Advanced Arrow Construction Rules

    Arrowhead Types

    [Illustrations of arrowhead types, including Armor Piercing (Needle Bodkin) and Cage Fire.]
    Arrowhead Designs

    The shape and design of an arrowhead determine the base amount of damage it can inflict. Unless stated otherwise, all arrows inflict Thrust damage when they hit. Tri- and quad-bladed arrowheads are wider, with more cutting planes and points, and so inflict more damage. Note that, while it is possible to have arrowheads with more than four bladed edges, such a thing is not practical. The amount of damage one can achieve with a base arrowhead design peaks at four blades. Adding more points and edges increases damage negligibly, and may even decrease penetration due to increased surface area.

    Flat Leaf: Triangular arrowhead with a wide, flat profile and two edges meeting at a sharp point. Also called two-bladed. 1D6+2 base damage.

    Three Blade: Refers to a pyramidal arrowhead with three cross-sectional edges meeting at a sharp point. Diameter of the arrowhead usually exceeds the diameter of the shaft. 1D6+4 base damage.

    Four Blade: Refers to a pyramidal arrowhead with four cross-sectional edges meeting at a sharp point. Diameter of the arrowhead usually exceeds the diameter of the shaft. 2D6 base damage.

    Field Points: The diameter of a field point never surpasses the diameter of the shaft. This arrowhead has no edges, but instead tapers into a conical point. The term “field point” refers specifically to target arrowheads; the two are one and the same. Base damage is 1D6. A field point can either be cut directly from the material of the shaft or, in the case of using a different material for the arrowhead, capped in said material.

    Arrowhead Materials

    Stone: Arrowheads can be made of various types of stone, flint, obsidian, and granite being the most common. Most types of stone used for arrowhead making are fairly lightweight but still affect distance: arrows with stone arrowheads travel only ¾ of their normal range. This makes stone arrowheads the cheapest to buy.

    Wood: This is the standard material used for arrowheads. Typically, no bonuses or penalties apply. However, there are certain rare types of wood that do extra damage when used for arrowheads, generally due to extraordinary strength and density. Ironwood and Yellow wood are two such types. Arrowheads made of Ironwood inflict an extra +1D6 damage. Arrowheads made of Yellow wood inflict an extra +2 damage.

    Horn & Bone: Animal or monster parts are often an acceptable alternative to wood. Depending on the creature, an arrowhead of bone or horn can be inferior to wood or as effective as metal. Certain beasts have ultra dense horns and bones that would be excellent for arrowhead making, though acquiring the material may be difficult for obvious reasons. Supernatural creatures such as dragons, Pegasus, demons, unicorns, etc, have magically resilient bone and horn. Arrowheads made of a supernatural creature’s parts are usually light weight, more damaging, and more durable. See the list of properties for supernatural bone and horn.

    Metal: Metal arrowheads are very effective. They generally do more damage but most are small enough so that weight is negligibly affected. However, reduce arrow range by 75 feet when using three or four-bladed metal broad-tipped arrowheads and 40 feet if using metal bodkins or four-bladed cage fires. Different types of metal can be used, steel, iron and lead being most common. Silver arrowheads are common for those who battle werebeasts and vampires. Metal arrowheads don’t dull or break as easily as wooden arrowheads. They receive a +1D4 to damage if iron or lead, and +1D6 if steel.
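    The design and material rules above combine into a single damage roll. The sketch below is one illustrative reading in Python (the function and table names are mine, not part of the rules, and treating the material bonus as simply additive to the base design roll is an assumption):

```python
import random

def roll(dice, sides, bonus=0):
    """Roll `dice` D`sides` + `bonus`, e.g. roll(2, 6) for 2D6."""
    return sum(random.randint(1, sides) for _ in range(dice)) + bonus

# Base damage by arrowhead design, from the rules above.
DESIGN_DAMAGE = {
    "flat leaf":   lambda: roll(1, 6, 2),  # 1D6+2
    "three blade": lambda: roll(1, 6, 4),  # 1D6+4
    "four blade":  lambda: roll(2, 6),     # 2D6
    "field point": lambda: roll(1, 6),     # 1D6
}

# Extra damage by arrowhead material, from the rules above.
MATERIAL_BONUS = {
    "stone":       lambda: 0,
    "wood":        lambda: 0,
    "ironwood":    lambda: roll(1, 6),  # +1D6
    "yellow wood": lambda: 2,           # flat +2
    "iron":        lambda: roll(1, 4),  # +1D4
    "lead":        lambda: roll(1, 4),  # +1D4
    "steel":       lambda: roll(1, 6),  # +1D6
}

def arrow_damage(design, material):
    """Total damage: base design roll plus material bonus (assumed additive)."""
    return DESIGN_DAMAGE[design]() + MATERIAL_BONUS[material]()

print(arrow_damage("three blade", "steel"))  # a 1D6+4 roll plus +1D6 for steel
```

    Range and durability modifiers (¾ range for stone, -75 ft for metal broadheads, etc.) would be applied separately, since they affect the shot rather than the damage roll.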

    Arrow Shaft Designs

    Standard: An ordinary arrow shaft with the same thickness from end to end.

    Barrel Tapered: An arrow shaft thickest in the center that tapers down on both ends. Sometimes used for distance shooting to lighten the arrow without reducing spine. Barrel Tapered arrows receive +70 ft. to range.

    Bob-Tailed: An arrow shaft that is thickest at the arrowhead, tapering toward the arrownock. No particular effect on arrow performance.

    Breasted: An arrow shaft where the last 7 to 10 inches of the nocked end (the breast) is tapered in order to improve flight characteristics. Especially good for use with longbows. +50 feet to range along with improved trajectory.

    Fluted: An arrow shaft with deep scoring and grooves that make it lighter, allowing it to travel farther. +30 ft. to range.

    Footed: An arrow with a hardwood piece joined to the point end, or foot, of the arrow shaft, by means of inlay work, to give the arrow greater durability and better balance. The footing helps to strengthen the arrow where breakage most commonly occurs, at the point. +15 S.D.C. to Durability.

    Shaft Length and Size

    Arrow shafts made for human-sized creatures generally range in length from 18 to 25 inches for short bows and 26 to 34 inches for longbows. Length has no actual effect on the penetration of an arrow. However, arrow length is important in relation to the type and size of the bow. A general rule of thumb is that the larger the bow, the longer the arrow must be. Using an arrow that is too long or too short for one’s bow is disadvantageous at best. The length of the arrow must correspond to the length of the bow according to a certain ratio. For every 0.1 meter discrepancy between the bow’s size and the appropriate arrow length, the arrow receives a -2 to strike. Use the chart below to determine the appropriate penalties.

    Generally, arrow shafts are 10 mm in diameter. However, it is possible to get arrows with larger shafts that do more damage. These so-called “war arrows” are about 12.5 to 13 mm in diameter. Arrowheads for these types of shafts are also proportionally adjusted in size. This may not seem like a sizable difference, but it is enough for the arrow to do an additional +1D6 damage. War arrows are subject to all the modifications and limitations available to standard arrows.

    Bow Length to Arrow Length Chart
    1.0 m bow = 18-19 inch arrow (0.46-0.48 m)
    1.1 m bow = 19-20 inch arrow (0.48-0.50 m)
    1.2 m bow = 20-22 inch arrow (0.50-0.56 m)
    1.3 m bow = 22-24 inch arrow (0.56-0.61 m)
    1.4 m bow = 24-25 inch arrow (0.61-0.64 m)
    1.5 m bow = 26-27 inch arrow (0.66-0.69 m)
    1.6 m bow = 27-28 inch arrow (0.69-0.71 m)
    1.7 m bow = 28-29 inch arrow (0.71-0.74 m)
    1.8 m bow = 29-30 inch arrow (0.74-0.76 m)
    1.9 m bow = 30-31 inch arrow (0.76-0.79 m)
    2.0 m bow = 31-32 inch arrow (0.79-0.81 m)
    2.1 m bow = 32-33 inch arrow (0.81-0.84 m)
    2.2 m bow = 33-34 inch arrow (0.84-0.86 m)
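    Putting the ratio rule and the chart together, the penalty can be computed mechanically. This is an illustrative sketch (the names are mine; rounding partial 0.1 m steps up to a full -2 is an assumption the rules do not state):

```python
import math

# Bow length (m) -> acceptable arrow length range (m), from the chart above.
BOW_TO_ARROW = {
    1.0: (0.46, 0.48), 1.1: (0.48, 0.50), 1.2: (0.50, 0.56),
    1.3: (0.56, 0.61), 1.4: (0.61, 0.64), 1.5: (0.66, 0.69),
    1.6: (0.69, 0.71), 1.7: (0.71, 0.74), 1.8: (0.74, 0.76),
    1.9: (0.76, 0.79), 2.0: (0.79, 0.81), 2.1: (0.81, 0.84),
    2.2: (0.84, 0.86),
}

def strike_penalty(bow_m, arrow_m):
    """-2 to strike per 0.1 m the arrow falls outside the charted range.
    Partial 0.1 m steps are rounded up (an assumption)."""
    low, high = BOW_TO_ARROW[round(bow_m, 1)]
    if low <= arrow_m <= high:
        return 0
    discrepancy = (low - arrow_m) if arrow_m < low else (arrow_m - high)
    return -2 * math.ceil(discrepancy / 0.1)

print(strike_penalty(1.5, 0.67))  # 0: within the charted range
print(strike_penalty(1.5, 0.47))  # -4: ~0.19 m too short, two 0.1 m steps
```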

    Shaft Materials (Shaftment)

    Wood types classified as “hard” are difficult to break and receive a durability of 20 S.D.C.

    Wood types classified as “moderately hard” receive a durability of 10 S.D.C.

    Wood types classified as “soft” are fairly easy to break and receive a durability of only 5 S.D.C.

    The following is a list of woods commonly used in arrow making. It should be noted that this is a brief list, and many other types of wood are used that are not mentioned here.

    Types of Wood:

    Hard | Moderately Hard | Soft

    [Table of wood types by hardness; surviving entries include a hard wood (Impact RF 12, 50 S.D.C.), Yellow Wood (Impact RF 10, 45 S.D.C.), and a lightweight wood granting +1 to strike.]

    Horn and Bone, as stated in the arrowhead materials section, are viable materials in arrow construction. Shafts of horn or bone provide additional bonuses when the materials are harvested from supernatural creatures. This is due to the creature’s magical/supernatural nature. Their bones and horns tend to be light weight, ultra dense, and magically active. Consequently, these materials can add a variety of enhancements to the arrow. Bone or horn from “mundane” animals or even intelligent non-supernatural creatures (i.e. other humanoids) is comparable to moderately hard wood (though GMs should use their discretion, due to size and shape considerations of bone). For a list of properties for supernatural bone and horn arrows, consult the list below.

    Metals such as steel are generally not used for arrow shafts. The first reason is the increased weight: metal-shafted arrows travel only a third of the normal distance, though they are still effective for close-range shots. The second reason is more complicated. Arrows possess a specific trait called spine. Spine is a measure of the stiffness of an arrow’s shaft. This stiffness, in relation to the arrow’s length, bow type, and arrowhead weight, among other variables, determines how well the arrow flies through the air. Arrow shafts that are too stiff or not stiff enough are simply not good arrows and will suffer from a wide range of problems (decreased range, inaccuracy, flawed release from the bow, etc.). Most metals tend to be too inflexible for use as shafts.

    Dwarves, being the master craftsmen that they are, can make arrows with metal shafts light enough for use and flexible enough for the arrow’s spine to not be adversely affected. However, these arrows cost 500% to 600% above market value. Alchemists can also make magically lightweight and flexible metal-shafted arrows. These arrows can be made from various metals, though steel is most commonly used. Metal-shafted arrows usually have a Durability of 60-90 S.D.C. each.

    Fletch Length

    Fletches are the feathers located at the end of an arrow. They can range in length from 1 ¼ inches to 6 inches. The length of an arrow’s fletches affects the distance and speed of the arrow. Arrows with short fletches (1.25 to 3.0 inches) fly faster, giving the arrow a +1 to damage. Arrows with long fletches (3.1 to 6.0 inches) give the arrow +1 to strike.

    Number of Fletches

    The number of fletches on an arrow can range from one to four, though three is, by far, the most widely used. On three fletch arrows, each feather is set 120 degrees apart to allow for bow clearance. One of the three fletches is a cock feather, while the remaining two are hen feathers. The cock feather is the fletch that is positioned perpendicular to the bow when the arrow is nocked. It is also known as the index feather and is generally a different color from the hen feathers. One, two and four fletch arrows have no designated cock feathers to speak of. Consequently, they cannot be misnocked.

    Fletches serve to stabilize the arrow: the more fletches, the straighter and farther the arrow flies. They can be acquired from the wing feathers of a variety of large birds, with goose, eagle, turkey and hawk being the most common. All feathers must be taken from the same wing. Using left-wing and right-wing feathers on the same arrow will result in poor flight. However, it should be noted that more than two fletches on an arrow can adversely affect its elliptical trajectory, sacrificing altitude for a level flight path (penalties are at the GM’s discretion). In addition, more than three feathers will cause a dramatic increase in drag, which in turn will affect arrow performance. Certain supernatural creatures possess feathers that, if used as fletches, can give arrows special properties. Refer to the Supernatural Properties table below.

    One Fletch: Mono-fletch arrows have relatively unstable flights, but are reliable enough for close range use. Reduce range by 50% after all other range modifiers. -1 to strike.

    Two Fletches: Dual-fletch arrows are relatively more stable than one fletch arrows but still don’t have the range of three fletch arrows. Reduce range by 10%.

    Three Fletches: Tri-fletch arrows are the most popular types with good overall stability, accuracy, and fair trajectory. They are standard issue for most military archers.

    Four Fletches: Quad-fletch arrows have each feather spaced about 75-102 degrees apart to allow for bow clearance. This amount of fletching creates a lot of drag at the back of the arrow. Consequently, the arrow does 1D6 less damage in the first 40 yards of flight, after which point it rapidly drops from the air.
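For quick reference at the table, the fletch-length and fletch-count modifiers above can be collected into a single lookup function. This is a minimal illustrative sketch; the function name and return format are my own, not terms from the rules:

```python
# Sketch of the fletch modifiers described above.
# Function name and return shape are hypothetical, not rulebook terms.

def fletch_modifiers(count, length_inches):
    """Return (strike, damage, range_multiplier) for a given fletching."""
    strike, damage, range_mult = 0, 0, 1.0

    # Length: short (1.25-3.0 in) gives +1 damage; long (3.1-6.0 in) gives +1 strike.
    if 1.25 <= length_inches <= 3.0:
        damage += 1
    elif 3.0 < length_inches <= 6.0:
        strike += 1

    # Count: one fletch halves range and takes -1 strike; two fletches
    # lose 10% range; three is standard. Four fletches are a special
    # case (-1D6 damage in the first 40 yards) left to the GM here.
    if count == 1:
        range_mult = 0.5
        strike -= 1
    elif count == 2:
        range_mult = 0.9

    return strike, damage, range_mult

# A one-fletch arrow with short (2 in) fletching:
print(fletch_modifiers(1, 2.0))   # (-1, 1, 0.5)
```

A GM could fold the result into an arrow's base stats before play rather than recomputing each shot.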

    Fletches are very important for arrows. They serve to create a little bit of drag at the rear end of the arrow which stabilizes its flight. Without fletches, an arrow simply could not fly straight. The number of fletches on an arrow and the length of each fletch significantly affect an arrow’s performance. However, the shape of an arrow fletch does next to nothing. In reality, fletch shapes are just a matter of personal preference, though flu-flus can serve a specific purpose. The various fletch shapes are achieved through the use of cutting blades or hot wires. Note that it is common for fletches to be colorful, which helps in locating lost arrows.

    Tribal: A fletch shape common to many tribes of the Yin-Sloth Jungles. Similar to the straight fletch, in that the rear edge follows the natural shape of the feather’s barbs. However, the leading edge is cut or burned to form a triangle.

    Straight: The simplest fletch shape where both the leading and trailing edges follow the barbs of the vanes, and the edge of the feather is cut or burned parallel to form a parallelogram.

    Eastern: A fletch shape used predominantly by archers of the Eastern Territory. The rear edge follows the barbs of the vanes of the feather. The leading edge is cut or burnt somewhat rounded, sloping smoothly toward the head of the arrow.

    Shield: Also known as the Swineback, this is a fletch shape where the leading edge is smoothly sloped toward the arrowhead (like the Eastern type), but the trailing edge is sloped against the barbs of the vanes. This makes each feather shaped like one-half of a knight's shield.

    Parabolic: A fletch shape where there is a smooth, bowl-shaped (parabolic) curve from front to back.

    Flu Flu: A broad, high fletch shape designed to slow an arrow rapidly after the first 30 yards, or so, and cause it to drop quickly. This is a good type of fletch to use on practice arrows or when one wants to feint an attack. Drag is increased by using four feathers, mounting them in a spiral pattern, and splitting the vanes. Arrows with flu-flu fletching do 2D6 less damage.

    Arrownocks

    An arrownock is the slot at the end of the arrow into which the bowstring fits. Most arrownocks are relatively the same in size and design. They can be made of a separate material than the shaftment, or they can be “self-nocks” which are cut directly from the material of the shaft. This has no actual bearing on game mechanics. Most nocks are approximately 3/8 of an inch deep and it is a common practice to reinforce a softwood arrow with an inlay of hardwood as the nock. However, bowstrings that are considerably thicker or thinner than 3/8 of an inch will require arrows with specially designed nocks suited for their unusual widths (most arrows are made for use by bows with 3/8 inch thick strings, the standard bowstring thickness).

    Special Features for Arrows

    Cresting: These are colorful markings placed on the shaft of the arrow. They serve to make a lost arrow easier to find and can also be used to denote who the arrow belongs to. Noble archers sometimes mark their arrows with very ornate cresting that can be read like heraldry. The cost of cresting depends on its detail.

    Greco: This is a special moisture repellant copper solution that can be applied to wooden arrowheads and shafts. Arrows coated in Greco will not rot in humid environments or suffer water damage. This solution is fairly common and easy to buy. Anyone with the Fletching skill will know how to make Greco.

    Fletch Dry: This is a white alchemical dry powder that is mixed with alcohol until it turns into a thin paste. A very fine layer is then applied to each fletch of one’s arrows and allowed to dry for a few hours. The fletches become waterproof for the next 2D4 days. They will shed water, won’t become distorted when wet, and won’t absorb humidity. Fletch Dry adds virtually no weight to the arrow and doesn’t stiffen the barbs of the feather.

    Interchangeable Arrowheads and Shaft: A shaft with an impermanent mount that allows different arrowheads to be taken off and fastened on. The advantage to this is obvious. The archer can use one shaft for multiple purposes by simply changing the arrowhead to better suit the situation. Rather than buy bundles of full arrows, the character can buy one of these special shafts and various complementary arrowheads. There are four types of mounts on the shaft into which the arrowhead can be fastened. Each type of mount requires special arrowheads with compatible bases so that they can be connected.

    1. Tapered mounts consist of a thin point which is inserted into a tapered hole at the base of the arrowhead.

    2. Tenon mounts or “slide-in” mounts are small holes or grooves cut into the end of the shaft into which the base of the arrowhead (a long thin rod or flat insert) can be slid.

    3. Slide-On mounts are just what they sound like, mounts in which the arrowhead slides tightly over the full diameter of the shaft.

    4. Screw mounts require that the arrowhead is screwed into the shaft. The base of the arrowhead is a short, protruding rod threaded like a screw. The hole that forms the mount of the shaft is also threaded inside.

    Speed Nock: This is a special arrownock that allows for the arrow to be nocked quickly and without error. +1 to initiative.

    Bonuses from Masterful Craftsmanship

    Bows made by master bowyers or arrows made by master fletchers will impart certain bonuses due to the sheer craftsmanship of the work. These modifiers are added on top of all others. The definition of a master is anyone 15th level or higher in their chosen field. Such an accomplished individual is quite rare.

    Arrows made by a master fletcher receive a +2 to strike, +2 to damage, and +50 ft. to range.

    Bows made by a master bowyer receive a +2 to damage with arrows and +15 S.D.C. to Durability.
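Since these bonuses are "added on top of all others," they stack as a plain accumulation over whatever the arrow already has. A minimal sketch of that stacking; the dict keys and function name are illustrative assumptions, not official terms:

```python
# Stacking the master-fletcher bonuses on top of any other modifiers,
# per "These modifiers are added on top of all others."
# Dict keys and function name are hypothetical, not rulebook terms.

MASTER_FLETCHER_BONUS = {"strike": 2, "damage": 2, "range_ft": 50}

def apply_master_fletcher(arrow_stats):
    """Return a new stat dict with the master-fletcher bonuses added."""
    out = dict(arrow_stats)
    for key, bonus in MASTER_FLETCHER_BONUS.items():
        out[key] = out.get(key, 0) + bonus
    return out

# e.g. a long-fletch arrow that already carries +1 strike:
base = {"strike": 1, "damage": 0, "range_ft": 700}
print(apply_master_fletcher(base))  # {'strike': 3, 'damage': 2, 'range_ft': 750}
```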

    Dwarven and Elven Manufactured Bows and Arrows

    Dwarves make the best equipment in the world. They are blacksmiths and craftsmen without peer. Yet, though dwarven weapons are, overall, superior to most elven weapons, elves excel in the area of bows and arrows. Dwarves don’t have a great deal of experience crafting bows and arrows as their race historically prefers to use close range melee weapons. However, due to their sheer skill and craftsmanship, they can still make devastating archery equipment. Elven crafted archery equipment tends to be lighter, more accurate, and farther reaching than dwarven equivalents. Still, bows and arrows of either make are expensive and hard to procure.

    +150% to standard price; +300% / +450% (only if Elven)

    Arrow Accuracy
    +1 strike
    +2 strike
    +3 strike
    +4 strike
    +5 strike
    +200% to standard price; +1000% / +1500% (only if Elven)

    Arrow Range
    +20 feet
    +40 feet
    +60 feet
    +80 feet
    +100 feet

    +150% to standard price

    +200% to standard price

    +250% to standard price

    +200% to standard price; +800% (only if Elven)

    Kobold and Jotan Manufactured Bows and Arrows

    All Kobold and Jotan made weapons are of excellent quality and craftsmanship, second only to Dwarves. This extends to bows and arrows, though elves have also surpassed them in this area.

    +100% to standard price

    Arrow Accuracy
    +1 strike
    +2 strike
    +3 strike
    +150% to standard price

    Arrow Range
    +20 feet
    +40 feet
    +60 feet

    +100% to standard price

    +150% to standard price

    +200% to standard price

    +150% to standard price

    Price List for Arrows

    1 gold piece = 3 silver pieces or 10 bronze/brass pieces
    1 lb. of gold = 2500 gold pieces
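With these exchange rates (1 gold = 3 silver = 10 bronze/brass), mixed-coin prices like "1 gold, 5 bronze" can be normalized to a single unit for comparison. A small sketch using exact fractions; the helper name is my own:

```python
from fractions import Fraction

# Exchange rates from the price list above:
# 1 gold piece = 3 silver pieces = 10 bronze/brass pieces.
SILVER = Fraction(1, 3)    # value of one silver piece, in gold
BRONZE = Fraction(1, 10)   # value of one bronze/brass piece, in gold

def to_gold(gold=0, silver=0, bronze=0):
    """Normalize a mixed-coin price to gold pieces, as an exact fraction."""
    return Fraction(gold) + silver * SILVER + bronze * BRONZE

# "1 gold, 5 bronze" from the arrow price list:
print(to_gold(gold=1, bronze=5))    # 3/2
# "1 gold, 2 silver":
print(to_gold(gold=1, silver=2))    # 5/3
```

Fractions avoid the rounding error a float would introduce for thirds of a gold piece.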

    2 bronze
    5 bronze
    1 silver
    1 silver
    1 silver

    3 bronze
    1 gold, 5 bronze
    1 gold
    1 gold
    1 gold
    1 gold, 2 silver
    2 gold

    Battle of Little Bighorn: Were the Weapons the Deciding Factor?

    It may be that the Battle of the Little Bighorn is the most written-about subject in American history. For more than 120 years, people have speculated about how Lieutenant Colonel George A. Custer and five companies of the 7th Cavalry were overwhelmed in southeastern Montana Territory by a combined force of Lakota and Cheyenne Indians on June 25, 1876. Yet the controversy does not appear any closer to resolution today.

    A number of reasons have been given for the defeat: Custer disobeyed orders, disregarded the warnings of his scouts, violated the principles of warfare by dividing his command, was ambushed, or was the victim of a conspiracy; internal regimental jealousies caused the defeat; the regiment was too tired to fight; there were too many raw recruits or too many Indians; the Indians had better weapons or the Army had defective guns. Most of the conjectures are moot, for they can be debated endlessly, with intellectual and emotional biases interfering with reasoned arguments. Given the nature of the evidence, however, one should be able to study the role the weapons played in the battle’s outcome with a modicum of objectivity.

    During the battle, the 7th Cavalry troopers were armed with the Springfield carbine Model 1873 and the Colt Single Action Army revolver Model 1873. Selection of the weapons was the result of much trial and error, plus official testing during 1871-73. The Ordnance Department staged field trials of 89 rifles and carbines, which included entries from Peabody, Spencer, Freeman, Elliot and Mauser. There were four primary contenders: the Ward-Burton bolt-action rifle, the Remington rolling-block, the ‘trapdoor’ Springfield, and the Sharps, with its vertically sliding breechblock.

    Although repeating rifles such as the Spencer, Winchester and Henry had been available, particularly in the post-Civil War years, the Ordnance Department decided to use a single-shot system. It was selected instead of a repeating system because of manufacturing economy, ruggedness, reliability, efficient use of ammunition and similarity to European weapons systems. Ironically, the board of officers involved in the final selection included Major Marcus A. Reno, who would survive the 7th Cavalry’s 1876 debacle on the Little Bighorn.

    The guns were all tested for defective cartridges, endurance, accuracy, rapidity of fire, firing with excessive charges, and effects of dust and rust. The Springfield was the winner. The Model 1873 carried by the 7th Cavalry was a carbine that weighed 7 pounds and had an overall length of 41 inches. It used a .45-caliber copper-cased cartridge, a 405-grain bullet and a charge of 55 grains of black powder. The best effective range for this carbine was under 300 yards, but significant hits still could be scored out to 600 yards. A bullet was driven out of the muzzle at a velocity of about 1,200 feet per second, with 1,650 foot-pounds of energy. The trapdoor Springfield could hurl a slug more than 1,000 yards and, with proper training, could be fired with accuracy 12 to 15 times per minute.

    The Colt Single Action Army revolver was chosen over other Colts, Remingtons and Starrs. By 1871, the percussion cap models were being converted for use with metallic cartridges. Ordnance testing in 1874 narrowed the field to two final contenders: the Colt Single Action Army and the Smith & Wesson Schofield. The Schofield won only in speed of ejecting empty cartridges. The Colt won in firing, sanding and rust trials and had fewer, simpler and stronger parts. The Model ‘P’ had a barrel of 7.5 inches and fired six .45-caliber metallic cartridges with 28 grains of black powder. It had a muzzle velocity of 810 feet per second, with 400 foot-pounds of energy. Its effective range dropped off rapidly over 60 yards, however. The standard U.S. issue of the period had a blue finish, case-hardened hammer and frame, and walnut grips. The Colt became ubiquitous on the frontier. To the soldier it was a ‘thumb-buster,’ to the lawman a ‘peacemaker’ or ‘equalizer,’ and to the civilian a ‘hog leg’ or ‘plow-handle.’ The revolver was so strong and dependable that, with minor modifications, it was still being produced by the Colt Company into the 1980s.

    Overall, the soldiers were pleased with their weapons. Lieutenant James Calhoun of Company L wrote in his diary on July 1, 1874: ‘The new Springfield arms and ammunition were issued to the command today. They seem to give great satisfaction.’ Although most of the men drew the standard-issue weapons, it was their prerogative to purchase their own arms. George Custer carried a Remington .50-caliber sporting rifle with octagonal barrel and two revolvers that were not standard issue–possibly Webley British Bulldog, double-action, white-handled revolvers. Captain Thomas A. French of Company M carried a .50-caliber Springfield that his men called ‘Long Tom.’ Sergeant John Ryan, also of Company M, used a .45-caliber, 15-pound Sharps telescopic rifle, specially made for him. Private Henry A. Bailey of Company I had a preference for a Dexter Smith, breechloading, single-barreled shotgun.

    It is well-known that Custer’s men each brought a trapdoor Springfield and a Colt .45 to the Little Bighorn that June day in 1876. Identification of the Indian weapons is more uncertain. Participants claimed to have gone into battle with a plethora of arms–bows and arrows, ancient muzzleloaders, breechloaders and the latest repeating arms. Bows and arrows played a part in the fight. Some warriors said they lofted high-trajectory arrows to fall among the troopers while remaining hidden behind hill and vale. The dead soldiers found pincushioned with arrows, however, were undoubtedly riddled at close range after they were already dead or badly wounded. The long range at which most of the fighting occurred did not allow the bow and arrow a prominent role.

    Not until archaeological investigations were conducted on the battlefield during the 1980s did the extent to which the Indians used gunpowder weapons come to light. Modern firearm identification analysis revealed that the Indians had spoken the truth about the variety and number of weapons they carried. The Cheyenne warrior Wooden Leg went into battle with what he called a ‘six-shooter’ and later captured a Springfield carbine and 40 rounds of ammunition. The Miniconjou One Bull, Sitting Bull’s nephew, owned an old muzzleloader. The Hunkpapa Iron Hawk and the Cheyenne Big Beaver had only bows and arrows. Eagle Elk, an Oglala, started the battle with a Winchester. White Cow Bull, an Oglala, also claimed to have a repeater.

    There were 2,361 cartridges, cases and bullets recovered from the entire battlefield, which reportedly came from 45 different firearms types (including the Army Springfields and Colts, of course) and represented at least 371 individual guns. The evidence indicated that the Indians used Sharps, Smith & Wessons, Evans, Henrys, Winchesters, Remingtons, Ballards, Maynards, Starrs, Spencers, Enfields and Forehand & Wadsworths, as well as Colts and Springfields of other calibers. There was evidence of 69 individual Army Springfields on Custer’s Field (the square-mile section where Custer’s five companies died), but there was also evidence of 62 Indian .44-caliber Henry repeaters and 27 Sharps .50-caliber weapons. In all, on Custer’s Field there was evidence of at least 134 Indian firearms versus 81 for the soldiers. It appears that the Army was outgunned as well as outnumbered.

    Survivors of the remaining seven companies of the 7th Cavalry asserted that the Indians were equipped with repeating rifles and mentioned Winchesters as often as not. Major Marcus Reno claimed: ‘The Indians had Winchester rifles and the column made a large target for them and they were pumping bullets into it.’ Although some white survivors claimed to be heavily outgunned, Private Charles Windolph of Company H was probably closest to the truth when he estimated that half the warriors carried bows and arrows, one-quarter of them carried a variety of old muzzleloaders and single-shot rifles, and one-quarter carried modern repeaters.

    The Winchester, in fact, was almost a duplicate of the repeater developed by B. Tyler Henry, who was to become superintendent at Oliver Winchester’s New Haven Arms Company. The success of Henry’s rifles ensured Winchester’s success, and the primary weapon carried by the Indians at the Little Bighorn was either Henry’s model or the slightly altered Winchester Model 1866. Both fired a .44-caliber Henry rimfire cartridge. The Henry used a 216-grain bullet with 25 grains of powder, while the Winchester used a 200-grain bullet with 28 grains of powder. Velocity was 1,125 feet per second, with 570 foot-pounds of energy. Cartridges were inserted directly into the front of the Henry magazine, while the Winchester 1866 had a spring cover on the right side of the receiver. The carbine and the rifle had a capacity of 13 and 17 cartridges respectively.

    Even though the board selected the Springfield as the top single-shot weapon, the Indians’ arms fared nearly as well in subsequent tests. The Springfields recorded 100 percent accuracy at 100 yards, but so did the Winchesters, Henrys, Sharps, Spencers and various muzzleloaders. At 300 yards, the Springfield .45-55 carbine’s accuracy dropped to 75 percent, while the repeaters fell to about 40 percent. Weapons such as the Springfield .50-70 rifle and the Sharps .45-70 rifle, however, still produced 100 percent accuracy at 300 yards. At 600 yards, both Springfields could still hit the mark 32 percent of the time, while the Winchesters and Henrys were almost useless at ranges over 300 yards.

    In effect, all of these weapons fared equally well at short ranges. The Army’s Springfields had an accuracy advantage over the Indians’ repeaters at medium ranges (200-500 yards), plus they were more rugged and durable. The long-range weapons the Indians had were too few (there is evidence of only one Sharps .45-70 at the battle) to make much of a difference. Their preponderance of repeaters increased the Indians’ firepower, but the repeaters were only good at short ranges. And the Indian narratives tell a story of a battle that, until the last desperate moments, was fought generally from long range (more than 500 yards)–a dubious advantage to the cavalrymen, since the relatively slow muzzle velocity of their Springfields meant a high trajectory that made chances of hitting anything slim.

    Overall, the pluses and minuses probably canceled each other out. It has been said that the 7th Cavalry might have won had it still used the seven-shot Spencers it carried at the Washita battle in 1868, but the Spencers were no better in range or accuracy than the Henrys or Winchesters, and they carried fewer bullets. The contention that the Springfields suffered from a significant number of extractor failures was not borne out. Only about 2 percent of the recovered specimens showed evidence of extractor problems. Custer has been criticized for not taking along a battery of Gatling guns, but General Nelson A. Miles commented on their usefulness: ‘I am not surprised that poor Custer declined’ taking them along, he said. ‘They are worthless for Indian fighting.’ Equipping the cavalry with another type of weapon probably would not have made much of a difference at the Little Bighorn.

    What, then, was the reason that the soldiers made such a poor showing during the West’s most famous Army-Indian battle? While Custer’s immediate command of 210 men was wiped out and more than 250 troopers and scouts were killed in the fighting on June 25-26, the Indians lost only about 40 or 50 men. The explanation appears to lie in the fact that weapons are no better than the men who use them. Marksmanship training in the frontier Army prior to the 1880s was almost nil. An Army officer recalled the 1870s with nostalgia. ‘Those were the good old days,’ he said. ‘Target practice was practically unknown.’ A penurious government allowed only about 20 rounds per year for training–a situation altered only because of the Custer disaster. And the 20 rounds of ammunition often were expended in firing at passing game rather than in sharpshooting. The 7th Cavalry was not hampered by new recruits, for only about 12 percent of the force could be considered raw. What handicapped the entire regiment, however, was inadequate training in marksmanship and fire discipline.

    It is a perplexing incongruity in a citizen-soldier army, but the vast majority of soldiers, when the time comes to kill, become conscientious objectors. It has been asserted that man is essentially a killer at heart, yet recent studies have found evidence quite to the contrary. Men, soldiers or not, simply have an innate resistance to killing. It is fairly well-established that when faced with danger, a man will usually respond by fight or flight. New studies, however, have argued that there are two other likely possibilities: posture or submit.

    It is the posturing that has increased with the introduction of firearms to the battlefield. It is almost impossible for a man to shirk battle when at arm’s length from an enemy wielding sword or pike, but it is easier to remain aloof at rifle range. One has other options besides immediate fight or flight. The Rebel yell or the Union ‘hurrah,’ for example, were simply means to bolster one’s courage while trying to frighten the enemy. The loud crack of the rifle also served the same purpose, filling a deep-seated need to posture–i.e., to put on a good show and scare the enemy, yet still leave the shooter far away from a hand-to-hand death struggle. In reality, those good shows were often harmless, with the rifleman firing over the heads of the enemy.

    Firing high has always been a problem, and it apparently does not stem solely from inadequate training. Soldiers and military historians from Ardant du Picq to Paddy Griffith and John Keegan have commented on the phenomenon. In Civil War battles, 200 to 1,000 men might stand, blasting away at the opposing lines at 30 to 50 yards distance, and only hit one or two men per minute. Commanders constantly admonished their troops to aim low and give the enemy a blizzard at his shins. Regardless, the men continued to fire high–sometimes intentionally, sometimes without consciously knowing what they were doing.

    In Vietnam, it was estimated that some firefights had 50,000 bullets fired for each soldier killed. In the Battle of the Rosebud, eight days before the Little Bighorn fight, General George Crook’s forces fired about 25,000 rounds and may have caused about 100 Indian casualties–about one hit for every 250 shots. One of the best showings ever made by soldiers was at Rorke’s Drift in an 1879 battle between the Zulus and the British infantry. There, surrounded, barricaded soldiers delivered volley after volley into dense masses of charging natives at point-blank range where it seemed that no shot could miss. The result: one hit for every 13 shots.

    Indeed, it was at times even difficult to get soldiers to fire at all. After the Battle of Gettysburg, 24,000 loaded muskets were recovered; 12,000 of them had been loaded more than once, 6,000 had from three to 10 rounds in the barrel, and one weapon had been loaded 23 times! One conclusion is that a great number of soldiers are simply posturing and not trying to kill the enemy.

    At the Little Bighorn, about 42,000 rounds were either expended or lost. At that rate, the soldiers hit one Indian for about every 840 shots. Since much of the ammunition was probably lost–Indians commented on capturing ammunition in cartridge belts and saddlebags–the hit rate must have been higher. Yet the results do not speak highly of a supposedly highly trained, ‘crack’ cavalry regiment.

    High fire very plainly took place at the Little Bighorn, most notably on Reno’s skirmish line in the valley. Troopers went into battle with 100 rounds of Springfield ammunition and 24 rounds of Colt ammunition. About 100 troopers on Reno’s line may have fired half of their ammunition toward the southern edge of the Indian village. The 5,000 bullets only hit one or two Indians, but they certainly damaged the lodges. A Hunkpapa woman, Moving Robe, claimed ‘the bullets shattered the tepee poles,’ and another Hunkpapa woman, Pretty White Buffalo, stated that ‘through the tepee poles their bullets rattled.’ The relatively low muzzle velocity of the Springfield meant that the soldier would have had to aim quite a bit over the head of an Indian for any chance to hit him at long distance. If the officers called for the sights to be set for 500 yards to hit Indians issuing from the village–and did not call for a subsequent sight adjustment–by the time the Indians approached to 300 yards, the bullets would be flying 12 feet over their heads. As a comparison, the modern M-16 round, traveling at 3,250 feet per second, has an almost flat trajectory, and the bullet will hit where it is aimed with very little sight adjustment.

    The soldiers’ difficulty in hitting their targets was also increased by the fact that the Indians stayed out of harm’s way for almost all of the battle. One archaeological field study located the Indian positions and discovered that nearly every location was 300 to 1,200 yards away from the troopers. Given the distances involved, the fact that soldiers tended to shoot high, the lack of marksmanship training and the conscious or subconscious posturing involved, it is not surprising that the troopers scored so few hits.

    Arguably, posturing has been a factor at every gunpowder battle, as it most likely was at the Little Bighorn–but how about submission? It was drummed into the common soldier that he should save the last bullet for himself. He supposedly would place his Colt to his head, pull the trigger and go to Fiddler’s Green, rather than take the chance of being captured alive. Custer had even requested that his wife, Elizabeth, who often rode with the cavalry, should be shot by an officer rather than chance being taken by the Indians. As strange as it may seem, even with this dread of being captured, surrender attempts were made at the Little Bighorn fight. Indian accounts tell of white men who, at the last second, threw their hands up in surrender and offered their guns to the onrushing warriors. The Lakotas and Cheyennes were not swayed.

    Given all these factors operating against the citizen-soldier, how could commanders ever go into battle expecting to win? The answer, again, lies not in the weapons the soldiers used, but in the soldiers themselves–and their officers.

    Dividing up a command in the near presence of an enemy may be an act to be avoided during large-scale maneuvers with army-sized units, but such is not the case during small-scale tactical cavalry maneuvers. Custer adhered to the principles for a successful engagement with a small, guerrilla-type, mobile enemy. Proven tactics called for individual initiative, mobility, maintaining the offensive, acting without delay, playing not for safety but to win, and fighting whenever the opportunity arose. It was accepted that Regular soldiers would never shirk an encounter even with a superior irregular force of enemies, and that division of force for an enveloping attack combined with a frontal assault was a preferable tactic. On a small scale, and up to a certain point, Custer did almost everything he needed to do to succeed.

    Problems arose, however, when tactics broke down from midlevel and small-scale, to micro-scale. According to then Brevet Major Edward S. Godfrey, fire discipline–the ability to control and direct deliberate, accurate, aimed fire–will decide every battle. No attack force, however strong, could reach a defensive line of steady soldiers putting out disciplined fire. The British army knew such was the case, as did Napoleon. Two irregular warriors could probably defeat three soldiers. However, 1,000 soldiers could probably beat 2,000 irregulars. The deciding factor was strength in unity–fire discipline. It was as Major Godfrey said: ‘Fire is everything, the rest is nothing.’

    Theoretically, on the Little Bighorn, with a small-scale defense in suitable terrain with an open field of fire of a few hundred yards, several companies of cavalrymen in close proximity and under strict fire control could have easily held off two or three times their number of Indian warriors. In reality, on the Little Bighorn, several companies of cavalrymen who were not in close proximity and had little fire control, with a micro-scale defense in unsuitable, broken terrain, could not hold off two or three times their number of Indian warriors.

    The breakdown stems from an attitude factor. Custer exhibited an arrogance, not necessarily of a personal nature, but rather as a part of his racial makeup. Racial experience may have influenced his reactions to the immediate situation of war. It was endemic in red vs. white modes of warfare and implies nothing derogatory to either side. Historically, Indians fled from large bodies of soldiers. It was Custer’s experience that it was much harder to find and catch an Indian than to actually fight him. Naturally influenced by his successful past experiences with small-unit tactics, Custer attacked. He was on the offensive. He knew he must remain on the offensive to be successful. Even after Reno had been repulsed, Custer was maneuvering, looking for another opportunity to attack.

    The positions that Custer’s dead were found in did not indicate a strong defensive setup. Even after the Indians had taken away the initiative, Custer’s mind-set was still on ‘attack.’ Although a rough, boxlike perimeter was formed, it appeared more a matter of circumstance than intent. Custer probably never realized that his men’s very survival was on the line, at least not until it was too late to remedy the situation. The men were not in good defensible terrain. They were not within mutual supporting distance. They were not under the tight fire control of their officers. Custer’s troopers were in detachments too small for a successful tactical stance. When the critical point was reached, the soldiers found themselves stretched beyond the physical and psychological limits of fight or posture–they had to flee or submit.

    Seemingly out of supporting distance of his comrades, the individual trooper found himself desperately alone. The ‘bunkie’ was not close enough. The first sergeant was far away. The lieutenant was nowhere to be seen. The trooper responded as well as he could have been expected to. He held his ground and fought, he fired into the air like an automaton, he ran, he gave up. Some stands were made, particularly on and within a radius of a few hundred yards of the knoll that became known as Custer Hill, where almost all of the Indian casualties occurred. When it came down to one-on-one, warrior versus soldier, however, the warrior was the better fighter.

    George Armstrong Custer may have done almost everything as prescribed. But it was not enough to overcome the combination of particular circumstances, some of his own making, arrayed against him that day. Inadequate training in marksmanship and poor fire discipline resulting from a breakdown in command control were major factors in the battle results. Neither Custer’s weapons nor those the Indians used against him were the cause of his defeat.

    This article was written by Greg Michno and originally appeared in the June 1998 issue of Wild West.

    On patience, progressions and probability

    In January 2019, I wrote a story for Stadium titled “Here’s why patience is important after a recent coaching hire.” In the lede, I cited Indiana and Ohio State’s then-five-game losing streaks, plus Texas Tech’s recent three-game slide against unranked opponents.

    “Despite their current losing streaks, the future is bright for each program,” I wrote.

    Like Indiana’s free throw shooting over the last four years, I may have gone 2-for-3.

    Less than three months after I wrote that piece, Texas Tech lost in overtime in the national championship game. Two months after that story was published, Ohio State beat Indiana in the 2019 Big Ten Tournament. That came just a year after the Buckeyes finished tied for second in the Big Ten during coach Chris Holtmann’s inaugural season.

    Ohio State’s win over Indiana in the 2019 Big Ten Tournament was the Buckeyes’ final victory before Selection Sunday. After both schools went 8-12 in conference play, Ohio State (19-14 at the time) made the NCAA tournament as a No. 11 seed, while Indiana (then 17-15) was told, “No.” The Buckeyes then upset No. 6 seed and Big 12 tournament champion Iowa State, a team with three future NBA players. Recently, Ohio State spent a week or two as a projected No. 1 seed for the 2021 NCAA Tournament.

    After I wrote that piece, which preached patience, I had a low-major head coach reach out and tell me he enjoyed the story. As a recently hired national writer, I can promise you the goal of the story wasn’t “Find data that stresses the importance of giving underperforming coaches a long leash, then get a DM from a random America East head coach.” I truly believed anxious fan bases getting restless with coaches in the middle of Year 2 was probably unwise.

    The story highlighted coaches like Jay Wright (who didn’t reach 20 wins or the NCAA tournament until Year 4), Tony Bennett (who took Virginia to one NCAA tournament in his first four years, then earned a No. 1 seed in Year 5) and John Beilein (who won 21 games and made the NCAA tournament in Year 2 and Year 4, before playing for a national championship in Year 6). Right or wrong, these coaches were often identified by myself, and others, in 2019 and 2020 as examples of the type of trajectory that Archie Miller’s tenure with the Hoosiers could take.

    And sure, it technically still could, if Miller gets a fifth season in Bloomington and, if that is deemed good enough, then a sixth. But as Year 4 comes to a close, the results are at best inconclusive, and likely closer to failure than passing if judged on a pass/fail basis.

    Below is a line graph of every Big Ten head coach hired since the start of the 2015-16 season and his program’s end-of-season ranking, plus their team’s current ranking, as of Sunday, March 7. Note: Greg Gard replaced Bo Ryan during the 2015-16 season, then Gard was hired full-time after the 2016 season. The 2016 season was included as Year 1 for Gard for the purposes of the graphs below since he coached the entire Big Ten season and postseason.

    Every coach besides Miller and Rutgers coach Steve Pikiell is currently on pace to have a better ranking this season than last.

    Indiana’s KenPom rankings from the last three seasons all fall between the mid-30s and low-50s, and even Rutgers’ last two rankings (No. 28 last season, No. 33 as of Sunday night) are better than Indiana’s best ranking under Miller – No. 34.

    Below is another line graph of those same seven coaches, this time with each of their regular-season conference winning percentages in each year of their tenure.

    With the exception of Nebraska’s Fred Hoiberg, who has had the worst team in the conference in each of his first two seasons, Miller is the only coach in the group who hasn’t had one season with a conference winning percentage above .500. In fact, his trajectory is trending downward after this season’s 7-12 finish in the conference.

    His best conference record was a 9-9 campaign in 2018. Seven of Indiana’s Big Ten wins that season came against the 10th, 11th, 12th, 13th and 14th-place teams in the conference – all of whom were ranked 85th or worse in the final rankings.

    Holtmann, Illinois coach Brad Underwood and Michigan coach Juwan Howard have each won at least 80 percent of their conference games in a season since being hired, with Holtmann and Underwood, of course, being hired in the same cycle as Miller, which will forever link the three coaches and the respective decision makers at each school.

    The closest thing to a proof of concept of the Archie Miller era at Indiana is the lost 2020 NCAA Tournament, which Indiana was projected to make as a No. 10 seed at the time the season shut down, according to Bracket Matrix. Big-picture conversations about Miller’s tenure can often boil down to a question of whether or not Indiana would’ve made the NCAA tournament in 2020, or maybe more accurately, whether the Hoosiers would’ve been a No. 10 seed or a No. 11 seed or No. 12 seed.

    On the surface, that’s the front line of a battle in a restless fan base’s civil war, one that seems more fitting of a mid-level A-10 school than a proud Big Ten program.

    Who knows where Indiana would’ve fallen on Selection Sunday last season. Maybe last season the Hoosiers could’ve pulled a 2019 Ohio State and beaten a higher-ranked opponent, turning a pedestrian season into a suddenly satisfying one that’s parlayed into an offseason full of optimism. Or maybe that only would’ve made the tenor of Indiana basketball conversations more harsh this season, as the Hoosiers would’ve made the NCAA tournament last season, then missed it this season, in what would’ve been a clear step back in 2021.

    Miller’s tenure at Indiana has been something of a long-tailed Rorschach test: it hasn’t been a resounding success, yet it hasn’t been an unmitigated disaster. There have been recruiting wins and some slight, year-over-year improvements in the team’s predictive metrics. But the most important returns – NCAA tournament appearances and wins, and Big Ten wins and top positions in the conference standings – are sorely lacking.

    The only ranked matchup Indiana has played in since Miller was hired was on Jan. 6, 2019, when No. 2 Michigan beat No. 21 Indiana 74-63. That’s it, that’s the list. There have been notable wins during Miller’s tenure, but almost all of them have been from the position of an underdog, an also-ran, an afterthought. But that’s often what Indiana has been as a program that has exited in the quarterfinals, or earlier, of the Big Ten tournament 17 times in the first 22 years of the tournament, with 2020 not included.

    In its NCAA Financial Report from the 2018-19 fiscal year, Indiana reported more than $127 million in revenue, $11.1 million of which came from men’s basketball ticket sales and another $1.2 million from parking and concessions. That’s roughly 10 percent of the athletic department’s budget coming directly from men’s basketball home games. There was $28.3 million in contributions that weren’t reported as being directly tied to one sport, but it’s probably a safe assumption that, behind the scenes, a healthy percentage was explicitly or implicitly tied to men’s basketball.

    Miller has a reported buyout of $10.35 million if he’s fired after this season, but the math becomes more complicated because Indiana, potentially preparing for a 2021-22 school year with 100-percent capacity at home games, could lose significant revenue from ticket sales and contributions if fed-up fans are done with the Miller era. It seems highly unlikely that the school would stand to lose more than $10.35 million in donations, ticket sales, concessions and parking by keeping Miller, but that’s undoubtedly part of the equation that athletic director Scott Dolson and his staff will have to examine regarding the future of the program.

    In the program’s present and recent past, it’s long removed from the successes of 1987, 1981 or 1976. A program ranking metric added prior to the 2020 season, which evaluates programs from the 1997 season through 2020, puts Indiana 23rd nationally, one spot behind Purdue, two spots behind Michigan and three behind Illinois.

    In fact, Indiana ranks eighth among Big Ten programs. That’s in the bottom half of the conference.

    ESPN showed a graphic during a recent night of college basketball that examined where the blue-blood men’s basketball programs were projected in regards to the NCAA tournament bubble. The schools in the graphic included Duke, Kentucky and North Carolina. It also included Michigan State, but not Indiana.

    I have no desire right now to have a discussion about who is or isn’t a blue blood, or what makes a blue blood a blue blood – although that’s implicitly tied to any discussion about the program’s current status and its future – but the Worldwide Leader, or at least one of its production assistants, didn’t think that Indiana was one.

    Blue bloods make coaching changes after Year 2, like Kentucky did with Billy Gillispie in 2009, before hiring John Calipari. Or they make coaching changes after Year 3, as North Carolina did with Matt Doherty, before it hired Roy Williams. Each school poached one of the best active head coaches in the sport and was rewarded with a national championship within three seasons.

    Put another way: Archie Miller might coach his fifth season at Indiana in 2021-22. At Kentucky and North Carolina, five seasons is how long it took for each school to hire, then replace, their current coach’s predecessor and then have their current coach win a national title. Calipari won a title at Kentucky in the fifth season after Gillispie was hired and Williams won a title at North Carolina in the fifth season after Doherty was hired.

    Even acknowledging Indiana’s likely NCAA tournament berth last season, a theoretical fifth season for Miller would be bottom-lined by whether Indiana simply makes the tournament, never mind winning it.

    Kentucky also won the national championship in its first season under Tubby Smith, who admittedly inherited a roster that included five returners who were future NBA players, each of them part of a program that had been a No. 1 seed in each of Rick Pitino’s final three seasons.

    When Kansas had to replace Roy Williams, it hired Bill Self from Illinois, which had earned a No. 1 seed in Self’s first season, then a No. 4 seed in each of the next two. Self led Kansas to a national championship in Year 5.

    Say what you want about UConn and where it stands in the blue-blood conversation, but from its first national championship in 1999 to its fourth and most recent title in 2014, no other school won more than two titles. And say what you want about former UConn coach Kevin Ollie, but the Huskies won the national championship in his second season and he only coached four more seasons before being sent out of town.

    The best programs win quickly and they’re quick to move on.

    Archie Miller could theoretically turn Indiana into one of the best programs in the country if given the time, but the schools that currently hold that status don’t wait this long into a coach’s tenure to find out. If Miller returns to Indiana because the school worked backchannels and was unable to find an accomplished, sitting high-major head coach who wanted the job, then that tells you all you need to know about how the program is viewed by those who matter, regardless of how many banners are in view in the south entrance of Assembly Hall.

    Achievements

    Achievement | In-game description | Actual requirements (if different) | Gamerscore earned | Trophy type (PS)
    Into Fire | Relieve a Blaze of its rod. | Pick up a blaze rod from the ground. | 20G | Bronze
    Overkill | Deal nine hearts of damage in a single hit. | Damage can be dealt to any mob, even those that do not have nine hearts of health overall. | 30G | Bronze

    After A Long War, Can NYU and the Village Ever Make Peace?

    Greenwich Village was up in arms. New York University was seeking to expand again, this time on the south side of Washington Square Park, and the plan was not going over well in the historically low-rise neighborhood. Residents formed protest groups, pledging to "Save Washington Square," warning that NYU was on the verge of taking over the park. They framed the conflict as a battle for territory. "What we want to know is when NYU is going to put a stop to its expansion along Washington Square," a leader of a local group told the New York Times. "It has been known for years as a residential section, and we're going to see that it stays that way." A quarter-century before, the school's chancellor had admitted that he hoped the school would eventually surround the square, taking the park for its campus. It was 1948.

    Of course, the tug-of-war between the university and the neighborhood did not end there. The current confrontation between NYU and the denizens of the Village, over the university's plan to situate an additional 2 million square feet in a handful of new buildings on two superblocks it acquired during a previous development struggle in the 1960s, is merely the latest feud in what has turned into a never-ending war.

    While tussles between universities and the localities they call home are nothing new—historians date these town-versus-gown struggles to the first time someone ever donned a collegiate robe—NYU's relationship with the Village has been particularly contentious. The school, with its ambitious growth, has been chewing off sections of the quirky, history-rich neighborhood slowly but steadily since the 1830s, causing an uproar at just about every step along the way. But NYU 2031, the current plan to rezone the superblocks and change their deed restrictions to add four more buildings over the next 17 years, has been a particularly bitter fight. NYU has grown to be one of the largest landowners in the city—by many accounts, it vies with the Catholic Church and Columbia as one of the top three—and years of expansion have turned many of the blocks surrounding the south and east sides of the park into a de facto campus for the school, with a ripple effect extending far beyond. The Village and NYU have grown strong in part because of their relationship to each other, but as vitriol over NYU's moves increases, have they finally outgrown each other?

    [Map via the Greenwich Village Society for Historic Preservation.]

    The backlash against NYU 2031 began soon after the school articulated its desire for growth in the Village during a town hall meeting with the community in 2007. Dozens of local groups began aligning in opposition, urging the university to reconsider situating a substantial amount of an overall 6 million square feet of space in the area. "As it started to become clear that NYU was not thinking about diverting their growth from the Village to other locations, and shoehorning more in where there was already such a concentration of NYU facilities, the dynamic became more oppositional," says Andrew Berman, the executive director of the Greenwich Village Society for Historic Preservation. When the specific plans eventually made it to the local community board, CB2, they were widely denounced and voted down unanimously. "I have not met a single neighborhood resident who supports the neighborhood plan," says State Senator Brad Hoylman, who has been an opponent since he was the vice chair of CB2.

    But even though it has generated significant opposition from the area's politicians, including Assemblywoman Deborah Glick, the plan did receive the endorsement of the leader it perhaps needed most: Councilwoman Margaret Chin, whose eventual support paved the way for a nearly unanimous city council ruling in its favor.

    More strikingly, much of the university's own faculty have lined up squarely with the neighborhood to oppose the plan. Thirty-nine departments and schools out of 175 within NYU passed resolutions against it in 2012, most with nearly unanimous votes. The Stern School of Business, not exactly a hotbed of anti-development beatniks, voted 52 to 3 against the plan, citing concerns over the university's financing and the possibility of a default. With deeper reverberations, NYU President John Sexton, who in nearly 12 years in his current office has presided over an ambitious expansion of the school in places like Abu Dhabi and Shanghai, was the recipient of an unprecedented series of no-confidence votes by five of the school's various colleges in 2013, including its largest, the College of Arts and Science.

    All of this came on top of the bad publicity—and scrutiny in the U.S. Senate—NYU received over its nonprofit status, as the school spent lavishly on bonuses, multi-million dollar apartments, and vacation home loans for a few academic stars while student debts increased. Under Sexton, tuition and fees have gone up from $27,000 to more than $43,000 (more than $60,000 including room and board) in ten years, making NYU one of the most expensive universities in the country. In spite of the tuition hikes, the school's own debt has ballooned, from $1.2 billion in 2002 to $2.8 billion in 2011, according to data compiled by the Times.

    Currently, the fate of NYU 2031 is hung up in state courts. In January, a judge invalidated much of the expansion by ruling that the Bloomberg administration had wrongfully turned over three parks to the university—the alienation of public parkland must be authorized by the state legislature—a decision that both NYU and its opponents claimed as partial victories. The plaintiffs, a consortium of 11 local groups led by the Greenwich Village Society for Historic Preservation, the NYU Faculty Against the Sexton Plan, and the Historic Districts Council, charge that the judge ruled incorrectly on the nature of a fourth park. The city joined the university in appealing the ruling, arguing that the parkland, as defined by the judge, was not in fact parkland.

    The rezonings and other permits necessary for the plan had been approved under the strongly pro-development administration of Michael Bloomberg and shepherded through the council by Christine Quinn to the curiously one-sided vote of 44-1 in July 2012. Chin had decided to support the plan in exchange for some design concessions from NYU—a 17 percent reduction of the buildout's square footage above ground—and some kickbacks for the community, including a new preschool. (NYU's most recent newsletter featured a photo of the councilwoman at the new school, sitting on the floor with a toddler). But with the legal tangle underway—the plaintiffs secured the pro-bono counsel of Randy M. Mastro of Gibson, Dunn & Crutcher, a renowned litigator in the city who is currently representing Chris Christie in the fallout from Bridgegate—NYU 2031 is no longer guaranteed to move forward, at least in its current iteration. All of which raises the question of why, after more than six years of struggle, NYU never came up with a plan to build anywhere else.

    Ironically, NYU and the Village owe a significant part of their respective successes to each other. The neighborhood was just beginning to emerge from its period as a suburban backwater—a settlement outside of the dense city grid clustered below Wall Street that early New Yorkers used to escape from yellow fever outbreaks—when NYU situated one of its first buildings on the east side of Washington Square Park in 1835. The city had only recently converted the square, previously a potter's field and a public gallows, into a parade ground and commons, and with new residents and a new park, the area began to develop quickly. Though the university did pursue the move to a more traditional campus in the Bronx around the turn of the 20th century, it found itself returning to the Village. Over the last 30 years, NYU has transformed itself from a second-thought commuter school into a global institution, gaining much prestige due to its roots in the cultural center that is the Village—and the school probably understood as much when it finally sold its Bronx campus in the 1970s.

    In the past few decades, its developments have galvanized significant neighborhood opposition. The construction of Bobst Library, the block-long sandstone building of 12 stories on the southeast corner of the park, was greeted by years of protests and a lawsuit from then-councilman Ed Koch and legendary activist Jane Jacobs, after the city gave the school a zoning exception that allowed the building to rise more than twice its allowable height. (For anyone who doubts that history repeats itself, arguments against that plan were remarkably similar to those today, as critics argued the building was to "serve the trustees and not the people," lambasted its design, and denounced the university's "expansionist" bent.) The school completed the library in 1973, and it has been casting a shadow over much of the square ever since.

    More recently, other NYU projects have drawn the neighborhood's ire: a 13-story law school building that required the destruction of two historic houses, one a former home of Edgar Allan Poe, in 2003 (NYU agreed to recreate the Poe building's facade a few doors down); a new student building, the Kimmel Center, constructed in 2004; and the reviled 26-story dorm on East 12th Street — the tallest building in the East Village — built in 2008. It's a long history of these types of developments that caused the Times to wonder if NYU was the "Villain of the Village" in 2001.

    These days, NYU almost wears the Village and its history, from the lifeless facade of the old St. Ann's church left standing in front of the 12th street dorm that replaced it, to the Poe house's drab reconstruction, to the historic Provincetown Playhouse. The playhouse, largely gutted on the inside, sits wedged in the middle of new construction, as NYU sought unsuccessfully to appease neighborhood preservationists when the school demolished most of the building that housed the theater in 2010.

    "What I find most maddening is they trade on their identification with Greenwich Village while negatively affecting the area," says Simeon Bankoff, the executive director of the Historic Districts Council.

    Compared to other neighborhoods, development in the Village has proven to be a tough sell for the university, due in no small part to a nonconformist bent that has attracted radicals, artists, and intellectuals for some 150 years, though the area has skewed more toward affluence of late. It's an offbeat identity that seems embedded in the fabric of the neighborhood, with the winding, off-grid streets that have swallowed many a tourist's afternoon. And no one has forgotten that it was the battleground of Jane Jacobs' infamous fight against Robert Moses in the 1960s, which defeated his ill-advised plans for the LOMEX (Lower Manhattan Expressway), a ten-lane highway that would have cut through 14 city blocks and the heart of the Village. Perhaps it is no coincidence that the judge's ruling hinges on the three strips of parkland, areas originally set aside as access ramps for Moses' highway, that were converted to greenspaces after the project flopped. The ghost of Jane Jacobs looms large in the Village.

    For all the drama, higher education institutions do bring innumerable benefits to the areas they call home: they can create thousands of jobs, help drive economic growth, and increase an area's cultural and intellectual capital. But they tend to piss off those living around them. "Where there are tensions, those tensions are often with immediate neighbors—they are often quite local," explains Henry S. Webber, a public policy professor and administrator at Washington University in St. Louis who has studied the role that colleges play in cities. "What universities and medical centers do is of great value to world, region, and city, but not entirely in the interest of their immediate neighbors."

    Like any big development project, at least some of the complaints directed at NYU inevitably derive from these more NIMBY-esque concerns. As a testament to how intertwined the Village and NYU have become over the years, the demographic most incensed by the project may be its faculty, roughly 40 percent of whom live on the two superblocks. If the university wanted to pick a fight with its professors, situating such an ambitious construction project outside their windows was a great way to do it.

    But most critics of NYU's plan can point to a deep history to ground their objections. The focus of the project's development is a three-by-three grid of nine square blocks that the city converted into three "superblocks" by eliminating two north-south streets—Wooster and Greene between Houston and West Fourth streets—at the behest of NYU as well as Robert Moses, after a bitter fight in 1954. Within 20 years, the school had acquired all three parcels. In 1967, the university constructed three 30-story towers on the southernmost superblock, bounded by Houston Street, and though they were designed by I.M. Pei, the high-rises were extremely controversial in the neighborhood. Opponents of the 2031 plan contend that NYU is now violating the agreements it made in order to build those high-rises in the first place.

    "The plan of course is basically a way of saying that we're going to fill in the park between the towers," says Andrew Ross, a sociology professor at the school who does not live in the immediate area (he resides in Tribeca). As Ross points out, critics of the current plan find themselves in the curious position of triumphing Moses' designs for the area, which ultimately lead to the creation of the parkland in exchange for the towers. "People who were embittered about what Moses did would say now that they feel he'd be turning over in his grave at the prospect of the park being filled in."

    As Bankoff says: "Towers in the park don't work, however, if you get rid of a park. Then you just have towers upon towers."

    For all the controversy in the Village, there are signs that if the school had decided to build elsewhere, it would have been welcomed with open arms. Leaders in lower Manhattan, struggling to renew development after the financial crisis, contacted NYU in 2010, urging it to consider expanding into the Financial District. "It could have been a win-win for everyone," says Catherine McVay Hughes, currently the chair of Community Board 1, who says she and her predecessor, Julie Menin, met with NYU administrators about the possibility.

    Leaders in other boroughs, too, say they have sought out the school. "With the troubles they've had in Manhattan with expansion in the Village, I've offered them, urged them to say, why not consider Brooklyn?" says Marty Markowitz, who as Brooklyn's borough president until 2014, saw the number of college students in Downtown jump from 35,000 in 2006 to more than 57,000. "We're only—how many subway stops away? 2, 3, 4? It's around the corner practically from NYU."

    Markowitz says he pitched John Sexton and other NYU officials about relocating its Tisch School of the Arts. "I really do think it should be in Brooklyn," he says. "It may not be tomorrow, but they're going to need that space for something else in Manhattan. Look what Tisch does—acting, musical theater writing, film, television, photography, dramatic writing. Come on! That's Brooklyn! More than the Village, you bet!"

    NYU has made some moves into Brooklyn, merging with an engineering school, Polytechnic University, in 2008, and later purchasing a derelict MTA building on Jay Street to convert into a school of urban science. But though it plans to locate some of the new 6 million square feet in Brooklyn, critics contend it hasn't committed to moving enough of its core functions there.

    As they point out, the economic benefits brought by a school like NYU diminish in already thriving areas like Greenwich Village. A study commissioned by the plan's opponents in April 2012 found that the plan could serve as a "potent economic development tool" wherever it was situated, but that the resulting upswing in sales would be on a significantly smaller scale in the Village—an increase of $23 million that would account for growth of only about 2.5 percent, in contrast to 10 percent in a place like Downtown Brooklyn. In other words, NYU could potentially do for another neighborhood now what it did for the Village long ago.

    NYU maintains that it simply needs more space in the Village to meet its academic needs. "This is where it's hard for people who just do real estate," says Alicia Hurley, a vice president in the university's public affairs department. "You can't just create a whole second campus for things that are already happening at the square." The university's own studies, like those released last week by a 26-member working group of faculty, students, and administrators, have found that the school has an "urgent" need for additional space in its core, and that financially, NYU 2031 is "reasonable, prudent, and within the university's means."

    But some of the university's most stringent critics allege that NYU's plan amounts to little more than a real estate deal, a power play to increase its square footage in one of the more desirable areas in town, and therefore the value of its real estate holdings. The school's board of trustees includes some of the biggest real estate developers in the city (and, some professors claim, not a single educator). "We're talking about New York City, where real estate deals make the place run," says Mark Crispin Miller, a tenured Media Studies professor at the school. "Even though they keep saying it's for academic space, it's mostly not for academic space, but it's a very effective cover for what is actually an extremely radical construction." The zoning change for the area, which was approved by the rezoning-friendly City Planning Commission under Amanda Burden with only one "no" vote in June 2012, most likely increased the value of land by some hundreds of millions of dollars.

    "The plan is so clearly oversize that it's hard not to see it as a stalking horse for what school officials figure they can get permission from the city to build," wrote the Times' architecture critic, Michael Kimmelman, in a scathing review of a version of the plan.

    The school vigorously disputes these accusations. "It's not real estate, it's our academic mission that drives this," says Hurley.

    [Approved square footages for the buildings of NYU 2031.]

    Of course, opponents disagree with the school's claims that so much of its expansion must happen in the Village. "NYU has always claimed to be a university for the whole city. Well if that's true, why on earth do they need to cram 2 million square feet of towering real estate into a famously low-rise neighborhood?" says Miller. "One of the things they keep saying to justify building in the neighborhood is that students can't possibly walk more than ten minutes between their dorm and their classes. That doesn't sound like New York City to me. My son is 12 and takes the subway uptown to go to school every day."

    Most observers acknowledge that the new court ruling probably will not force NYU back to the drawing board, though it could mandate a few more compromises. De Blasio, for all his community spirit, has indicated that the city will not drop its appeal of the ruling.

    "The original plan, as public advocate, I was opposed to because I thought it was too expansive. The city council passed a much smaller plan which I felt much better about," de Blasio told Curbed at a recent press conference. "The lawsuit is a different matter; the lawsuit involves issues that go far beyond the issue of NYU, and from the city's perspective sets precedents that actually are very problematic." A spokeswoman from City Hall confirmed that the administration has no intention of dropping the appeal.

    The nine-block grid at the center of the conflict represents one of the fundamental tensions in the city, between developers seeking leniency from the city's regulations to build higher and denser, and average residents, who are right to complain that it is often much easier for well-connected (and deep-pocketed) institutions to get around the rules than it would be for them. The more jaded among us would say that's just how the city runs. But it's worth wondering why the city makes bargains and restrictions—like those to create the tower-in-the-park design on the two superblocks—if only to revisit them at the behest of the same developers a few decades later.

    Many other schools around the country have begun expanding their reach outside their immediate campuses as they have grown, and NYU has made some movements to do the same. Perhaps future conflicts could be avoided if it pursued these options more aggressively. In the meantime, it seems the school may eke out more space in the Village yet again. If that's the case, the current plan will recede from the spotlight, becoming just one of many development projects in a busy metropolis with a short memory, a few more headlines about a bitter battle between the Village and the school gathering dust in the archives. Both sides may take comfort that regardless of the outcome in the courts, the current struggle will soon draw to a close. It promises not to be the last.
    · NYU 2031 coverage [Curbed]
    · Curbed Features archive [Curbed]

    What persuades white Southerners to remove Confederate flags and monuments?

    Across the United States and around the world, record-breaking Black Lives Matter protests and political pressure are pushing governments to remove public flags and monuments celebrating the Confederacy and white supremacy more generally.

    Within the United States, white Southerners’ resistance remains the biggest obstacle to removing Confederate shrines. Many continue to argue that the monuments aren’t racially motivated, despite the fact that most were installed to celebrate and enforce Jim Crow. For example, 77 percent of white North Carolinians opposed removing the monuments in an Elon poll last fall, similar to the 80 percent of white Louisianans in 2016 in an LSU poll. What might persuade them?

    Can white Southerners learn from postwar Germany’s dismantling of Nazi monuments?

    White Southerners erected most Confederate monuments in the late 19th and early 20th centuries to celebrate their violent victory over Reconstruction, a brief period when the federal government occupied the South to enforce the Constitution’s guarantees of racial equality. Authoritarian Jim Crow followed, in which the new white government and white supremacist groups violently enforced segregation and black disenfranchisement. The Confederate battle flag reemerged in the mid-20th century as a symbol of white resistance to the civil rights movement’s pursuit of black voting rights and desegregation.

    By contrast, after Germany’s military defeat in World War II, its postwar government systematically removed all public displays celebrating the Nazi regime, and focused its public history on remembering Nazi atrocities instead. In fact, Germany banned citizens from displaying Nazi symbols, too, arguing that those symbols’ implied violence outweighed the appeal of free speech. The government called this effort “de-Nazification.”

    We wanted to know whether comparing Confederate to Nazi symbols would persuade Americans to consider “de-Confederation.”

    How we did our research

    We commissioned the survey firm Lucid to conduct a nationally representative survey experiment in April. 2,500 Americans responded, including 643 white Southerners. We randomly assigned a third of all respondents to read a conventional argument against the monuments, focused on how black Americans see them symbolizing injustices and pain and explaining the historical revisionism of the "Lost Cause." That reading included a picture of a statue with a soldier standing with a Confederate battle flag.

    Luck and the Draw

    These two go hand in hand, so perhaps this could be boiled down to simply luck. But perhaps the biggest indicator of postseason success is the draw. In 2014, for example, the Hawkeyes ended up in a play-in game. They lost in overtime to Tennessee, which then benefited from a major gift on the draw. The Volunteers advanced to take on an over-seeded UMass team before getting a Round of 32 matchup against 14th-seeded Mercer thanks to a Round 1 upset of Duke.

    Iowa can’t control the draw that lies in front of them, but they already did quite a bit to prepare for the most difficult matchups that could lie ahead. The Big Ten is clearly the best conference in America with 9 of the league’s 14 teams in the NCAA Tournament. Iowa is battle-tested against that league and five of their eight losses this season came against teams seeded 1 or 2 in the Tournament. That is to say, Iowa wouldn’t be facing an opponent of that caliber until the Elite 8 at the earliest.

    But Iowa did lose three other games to opponents who aren’t even in the tournament. As noted, two of those came without CJ Fredrick, but the Hawkeyes have to find a way to overcome a potentially tricky matchup through adaptability. A big piece of that comes back to the two controllables above: defense and attacking on offense.

    The other piece here is just plain luck. Iowa needs to have shots go down. Over the course of an entire season, we saw them do just that and the result is a 2 seed in the NCAA Tournament. But a number of Iowa’s losses came down to Iowa simply not shooting well.

    On the season, the Hawkeyes shot 39% from beyond the arc. That was good enough for 13th in the nation. In their losses, they shot just 32% from deep. Some of that can surely be attributed to great defense by opponents. Half of Iowa’s losses came to teams ranked inside the top-10 in KenPom’s adjusted defensive efficiency. But some of it is just plain luck.

    In the loss to Gonzaga, for example, Iowa shot just 18% from beyond the arc. The Zags have the 10th rated defense according to KenPom, so that makes some sense, but a more detailed look shows it really came down to Iowa simply not hitting shots.

    ShotQuality uses more than 90 variables to assess the quality of every shot taken in an NCAA game, including the average shooting percentage of the shooter, the shot distance, defensive closeout, and much, much more. Over the course of the season, the Hawkeyes led the nation in ShotQuality score, which is to say that on average they have good shooters taking good shots.

    That makes intuitive sense given a free-flowing style of play that emphasizes extra passes to get shooters open looks, and given the results we've seen from one of the most efficient offenses, in terms of points per possession, in the modern era.

    But in several of Iowa’s losses, shots simply weren’t going in despite their very high ShotQuality score.

    Again, bad shooting nights happen. But early in the season a bad shooting night meant Iowa was virtually doomed. The Hawkeyes can overcome struggles on offense by controlling the two controllables. We saw this against Wisconsin in the Big Ten Tournament.

    In that matchup, Iowa shot just 10% from beyond the arc, but they managed to get a win and advance. They did so almost exclusively by ratcheting up the defense and holding Wisconsin to an incredibly low 57 points. They kept the Badgers under 39% shooting from the floor and they came away with a win.

    Iowa needs to channel that defensive intensity and commit to getting to the free throw line if they want to overcome the inevitable cold shooting night. If they can pair that with their season-average shooting, the sky is the limit.

    ShotQuality uses its offensive and defensive shot selection analytics to predict future outcomes as well. Using the algorithm, they've played out the NCAA Tournament based on the existing bracket and matchups at hand. Hawkeye fans will be happy with the results if Iowa can live up to the expectations they've built all season.