Mail armour provided an effective defence against slashing blows by edged weapons and penetration by thrusting and piercing weapons; in fact, a study conducted at the Royal Armouries at Leeds concluded that "it is almost impossible to penetrate using any conventional medieval weapon." Generally speaking, mail's resistance to weapons is determined by four factors: linkage type (riveted, butted, or welded), material used (iron versus bronze or steel), weave density (a tighter weave requires a thinner weapon to penetrate it), and ring thickness (generally ranging from 18 to 14 gauge (1.02 to 1.63 mm diameter) wire in most examples). Mail, if a warrior could afford it, provided a significant advantage when combined with competent fighting techniques. When the mail was not riveted, a well-placed thrust from a spear or thin sword could penetrate, and a blow from a pollaxe or halberd could break through the armour. Some evidence indicates that during armoured combat the intention was to get around the armour rather than through it: a study of skeletons found in Visby, Sweden, showed that a majority bore wounds on the less well protected legs.
The flexibility of mail meant that a blow would often injure the wearer, potentially causing serious bruising or fractures, and it was a poor defence against head trauma. Mail-clad warriors typically wore separate rigid helms over their mail coifs for head protection. Likewise, blunt weapons such as maces and warhammers could harm the wearer by their impact without penetrating the armour; usually a soft armour, such as a gambeson, was worn under the hauberk. Medieval surgeons were quite capable of setting and caring for bone fractures caused by blunt weapons. Given the period's poor understanding of hygiene, however, cuts that could become infected were a much greater problem. Mail armour thus proved to be sufficient protection in most situations.
In the 1970s and 1980s, medial capitals were adopted as a standard or alternative naming convention for multi-word identifiers in several programming languages. The precise origin of the convention in computer programming has not yet been settled. A 1954 conference proceedings informally referred to IBM's Speedcoding system as "SpeedCo". Christopher Strachey's paper on GPM (1965) shows a program containing identifiers with medial capitals, such as "NextCh" and "WriteSymbol".
Multiple-word descriptive identifiers such as end of file or char table cannot be used in most popular programming languages because the spaces between the words would be parsed as delimiters between tokens. The alternative of running the words together as in endoffile or chartable may result in identifiers that are difficult to understand and perhaps even misleading; for example, chartable is ambiguous as it could mean "chart-able" (able to be charted) or "char table" (a table of characters).
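For illustration, a minimal C sketch (the variable names here are invented for the example) shows both problems: a space-separated name is rejected by the compiler, while a run-together name compiles but is easy to misread:

    #include <stdio.h>

    int main(void) {
        /* int end of file = -1;  <- syntax error: the spaces split the
           intended name into three separate tokens */
        int endoffile = -1;  /* legal, but hard to parse at a glance */
        int chartable = 0;   /* legal, but ambiguous: "char table" or
                                "chart-able"? */

        printf("%d %d\n", endoffile, chartable);
        return 0;
    }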
Some early programming languages, notably Lisp (1958) and COBOL (1959), addressed this problem by allowing a hyphen ("-") to be used between words of compound identifiers, as in "END-OF-FILE": Lisp because it worked well with prefix notation (a Lisp parser would not treat a hyphen in the middle of a symbol as a subtraction operator) and COBOL because its operators were individual English words. This convention remains in use in these languages, and is also common in program names entered on a command line, as in Unix.
However, this solution was not adequate for mathematically oriented languages such as FORTRAN (1955) and ALGOL (1958), which used the hyphen as an infix subtraction operator. These early languages instead allowed spaces within identifiers, determining the end of an identifier by context. This approach was abandoned in later languages because of the complexity it adds to tokenization. (FORTRAN initially restricted identifiers to six characters or fewer, effectively preventing multi-word identifiers except those made of very short words.)
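The hyphen problem is easy to reproduce in a C-like language, where the same tokenization rule applies; in this minimal sketch (variable names invented for the example), a hyphenated "identifier" silently becomes a chain of subtractions:

    #include <stdio.h>

    int main(void) {
        int end = 10, of = 3, file = 2;

        /* Intended as one name, but the lexer sees two minus signs:
           parsed as ((end - of) - file) = 5 */
        int x = end-of-file;

        printf("%d\n", x);  /* prints 5 */
        return 0;
    }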
Exacerbating the problem was the fact that common punched card character sets of the time were uppercase only and lacked other special characters. It was only in the late 1960s that the widespread adoption of the ASCII character set made both lower case and the underscore character _ universally available. Some languages, notably C, promptly adopted underscores as word separators, and identifiers such as end_of_file are still prevalent in C programs and libraries (as well as in later languages influenced by C, such as Perl and Python). However, some languages and programmers chose to avoid underscores (among other reasons, to prevent confusing them with whitespace) and adopted camel case instead.
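Both conventions remain legal in C, so the choice between them is purely stylistic; a short sketch (function and parameter names invented for the example) contrasts the two:

    #include <stdbool.h>

    /* Underscore ("snake case") style, long prevalent in C libraries */
    bool end_of_file_reached(int current_pos, int file_length) {
        return current_pos >= file_length;
    }

    /* Camel case style, common in languages influenced by C */
    bool endOfFileReached(int currentPos, int fileLength) {
        return currentPos >= fileLength;
    }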
Charles Simonyi, who worked at Xerox PARC in the 1970s and later oversaw the creation of Microsoft's Office suite of applications, invented and taught the use of Hungarian Notation, in which the lower case letter at the start of a (capitalized) variable name denotes its type. One account claims that the camel case style first became popular at Xerox PARC around 1978, with the Mesa programming language developed for the Xerox Alto computer. This machine lacked an underscore key, and the hyphen and space characters were not permitted in identifiers, leaving camel case as the only viable scheme for readable multiword names. The PARC Mesa Language Manual (1979) included a coding standard with specific rules for upper and lower camel case that was strictly followed by the Mesa libraries and the Alto operating system.
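A brief sketch of what Hungarian notation looks like in practice (these particular prefixes are common conventions, assumed here for illustration rather than drawn from Simonyi's original formulation):

    /* Hungarian notation in C: a lowercase prefix encodes the type */
    int   iCount = 0;          /* i  -> integer                  */
    char  szName[] = "Alice";  /* sz -> zero-terminated string   */
    char *pszName = szName;    /* p  -> pointer (to a string)    */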
The Smalltalk language, which was developed originally on the Alto and became quite popular in the early 1980s, may have been instrumental in spreading the style outside PARC. Camel case was also used by convention for many names in the PostScript page description language (invented by Adobe Systems founder and ex-PARC scientist John Warnock), as well as for the language itself. In addition, Niklaus Wirth, the inventor of Pascal, came to appreciate camel case during a sabbatical at PARC and used it in Modula, his next programming language.