Original source publication: Teixeira, A. and F. de Sá-Soares (2013). A Revised Framework of Information Security Principles. In Furnell, S. M., N. L. Clarke and V. Katos (Eds.), Proceedings of the European Information Security Multi-Conference EISMC 2013 - IFIP Information Security Management Workshop. Lisbon (Portugal).

A Revised Framework of Information Security Principles

André Teixeira and Filipe de Sá-Soares

Centro ALGORITMI, Departamento de Sistemas de Informação, Universidade do Minho, Guimarães (Portugal)

Abstract

Confidentiality, Integrity and Availability are referred to as the basic principles of Information Security. These principles have remained virtually unchanged over time, but several authors argue they are clearly insufficient to protect information. Others go a step further and propose new security principles to update and complement the traditional ones. Prompted by this context, the aim of this work is to revise the framework of Information Security principles, making it more current, complete, and comprehensive. Based on a systematic literature review, a set of Information Security principles is identified, defined and characterized, which subsequently leads to the proposal of a Revised Framework of Information Security Principles. This framework was evaluated in terms of completeness and wholeness by intersecting it with a catalog of threats, which resulted from the merger of four existing catalogs. An initial set of security metrics, applied directly to the principles that constitute the framework, is also suggested, allowing, in the case of adverse events, an assessment of the extent to which each principle was compromised and an evaluation of the global effectiveness of the information protection efforts.

1. Introduction

The generalization of Information Systems (IS) to all areas of society, coupled with the constant evolution of Information Technology (IT), configures an ecosystem where it is relatively cheap and easy to store, process, and share information. This ecosystem presents several opportunities for organizations and individuals, profoundly changing the way they communicate, organize, and interact. The same ecosystem, however, raises a host of risks to information manipulation activities, justifying concerns and investments in the protection of information and related resources.

Traditionally, information and IS protection has been guided by three basic principles: Confidentiality, Integrity and Availability, often referred to as the CIA model, with the acronym capturing the first letter of each of the three principles. However, some authors argue that these principles, although basilar and important, may not be sufficient, since they do not address all Information Security (InfoSec) threats and have not evolved at the same pace as the threats. As Parker [1998, p. 211] noted fifteen years ago, "We need a new model to replace the current inarticulate, incomplete, and incorrect descriptions of information security. The current models limit the scope of information security mostly to computer technology and ignore the sources of the problems that information security addresses. They also employ incorrect meanings for the words they use and do not include some of the important types of loss such as stealing copies of information and representing information."

Over the years, the dissatisfaction with the CIA model led several authors to redefine existing principles or to propose new principles that complement and update the traditional ones. Among those authors are Parker [1998], Dhillon and Backhouse [2000], Stamp [2006], and Whitman and Mattord [2011].

We use the expression information security principles to mean those attributes of information and other IS resources that may work as guidelines, goals, or focal points for information protection efforts. The importance of the principles is that by identifying them we are actually defining information security, deducing from them InfoSec objectives, concerns, and scope. Accepting the foundational or ontological role of the principles for the activity of information security management implies that it is important to base the implementation of InfoSec controls on a firm, complete and updated set of InfoSec principles.

In this work we propose a revised framework of information security principles. Prior to the proposal, we undertake a review of the literature on information security principles, followed by an exercise where we relate InfoSec threats to the principles. The revised framework is composed of a set of definitions and a schematic structure for the organization of the principles. In the end, we suggest an initial set of metrics to evaluate the extent to which InfoSec principles are compromised.

2. Analysis of Literature on Information Security Principles

The first step we took towards the revised framework of InfoSec principles was the identification and characterization of the attributes that the literature explicitly or implicitly indicates as information security principles.

The nodal point for this review was the set of definitions found in the literature for those attributes. From an operational point of view, we used three main sources of definitions. The first source was dictionaries, where we sought the definition of the pivotal word of each concept reviewed (two dictionaries were consulted: a dictionary of the Portuguese language and a dictionary of the English language, namely the dictionary of the Academy of Sciences of Lisbon and the Merriam-Webster dictionary). The second source of definitions was publications by international organizations in the field of InfoSec, such as standards published by the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST), as well as historically relevant references, e.g. the Information Technology Security Evaluation Criteria (ITSEC), the Generally Accepted Information Security Principles (GAISP) and the Control Objectives for Information and related Technology (COBIT). The third source of definitions was documents where individual authors delineated their understanding of InfoSec principles. From these sources, a total of seventeen security attributes were identified.

In order to compare the definitions we established a schema composed of four parameters: the scope in which the definition was presented, i.e., the domain or theme of the work reviewed; the nature assigned to the principle defined; the object focused on by the definition; and the purpose assigned to the principle.

To avoid a tedious recitation of definitions, we chose to present the reviewed definitions in tabular form. For each of the concepts discussed we condensed the definitions in a table that identifies the proponents of the definition and its scope, nature, object, and purpose. For those cases where it was not possible to fill all the fields, we marked the missing values as n/a (not available).

These tables provide a general and immediate overview of the terms and expressions used, facilitating the identification of common and divergent points between the authors, as well as the verification of whether there is a common sense among the definitions advanced for the same concept.
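For concreteness, each reviewed definition can be thought of as a record holding the proponent plus the four comparison parameters. The short Python sketch below is only an illustrative encoding of this schema; the example values are hypothetical and are not taken from the paper's tables:

from dataclasses import dataclass

@dataclass
class ReviewedDefinition:
    """One row of a definition-summary table."""
    proponent: str  # author or organization advancing the definition
    scope: str      # domain or theme of the reviewed work
    nature: str     # e.g. principle, characteristic, goal, ability
    object: str     # e.g. information, data, systems, people
    purpose: str    # what the principle is meant to ensure ("n/a" when not available)

# Hypothetical example row, not an entry of any of the tables below:
example = ReviewedDefinition(
    proponent="ISO/IEC 27000",
    scope="InfoSec management",
    nature="property",
    object="information",
    purpose="not made available or disclosed to unauthorized entities",
)
print(example.nature)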

2.1 The Triad

The first group of concepts reviewed composes the traditional triad of InfoSec, CIA (Confidentiality, Integrity, and Availability). For a long time these three principles formed the fundamental model on which InfoSec rested.

Confidentiality

Confidentiality is one of the three basic and traditional InfoSec principles and, probably, the one that is most easily and frequently associated with security. Historically, it has military roots and it was the first principle formalized in a seminal InfoSec document, the TCSEC (Trusted Computer System Evaluation Criteria), based on the Bell-LaPadula lattice model.

Table 1 summarizes the review made on the concept of confidentiality.

Table 1: Summary of Confidentiality Definitions

This is the principle that deals with secrecy, covering information in storage, during processing, and while in transit.

In terms of scope, more than half of the definitions for confidentiality are presented in the context of InfoSec, with an emphasis on the management and governance of InfoSec and on the principles and guidelines of InfoSec.

Regarding the nature of the concept, there is no clear trend, although goal, characteristic, and principle stand out.

In what concerns the objects targeted by the definitions, there is, unsurprisingly, a predominance of information, followed by data and systems.

The purpose of confidentiality is, according to most authors, to not disclose, to not make available and to not allow access to information by unauthorized entities or people or in unauthorized ways. Authors like Parker [1998] and Pfleeger and Pfleeger [2003] have called attention to the fact that confidentiality should not have as its sole concern the disclosure of (secret) information, but also voluntary or involuntary observation, printing or simply knowing that a particular information asset exists. Underlying these concerns is the theme of access to information and its control, which is strongly related to the authorization process. An additional observation is that confidentiality may be temporary [ISSA 2004], meaning that information may be classified as confidential for a specific period of time.
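Because the purpose of confidentiality is tied to controlling who may access information, and because a confidentiality classification may hold only for a limited period [ISSA 2004], the idea can be illustrated with a minimal access-control sketch in Python. The asset, reader list, and declassification date below are invented for illustration; this is not a control prescribed by the reviewed literature:

from datetime import date

# Hypothetical asset classified as confidential until a declassification date.
asset = {
    "name": "quarterly-forecast.xlsx",
    "classification": "confidential",
    "declassify_on": date(2014, 1, 1),
    "authorized_readers": {"alice", "cfo"},
}

def may_read(user: str, asset: dict, today: date) -> bool:
    """Grant access only to authorized readers while the asset remains confidential."""
    if asset["classification"] == "confidential" and today < asset["declassify_on"]:
        return user in asset["authorized_readers"]
    return True  # the classification period has elapsed

print(may_read("alice", asset, date(2013, 6, 1)))    # True: authorized reader
print(may_read("mallory", asset, date(2013, 6, 1)))  # False: unauthorized while confidential
print(may_read("mallory", asset, date(2014, 2, 1)))  # True: confidentiality was temporary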

In general it can be concluded that, based on the reviewed definitions, there is consensus among the authors, but without any single conception standing out whose definition is generally adopted.

Integrity

The principle of integrity is also part of the traditional model of InfoSec. For the general public, however, it is probably less noticeable than confidentiality. It followed confidentiality in terms of formalization, being conveyed in the Biba model.

Table 2 summarizes the review made on the concept of integrity.

Table 2: Summary of Integrity Definitions

Underlying the concept of integrity is the notion of change, with the majority of definitions addressing the access to information resources and the associated authorization process.

In terms of scope, the analysis is similar to the principle of confidentiality, since most definitions are proposed in the domain of InfoSec.

Regarding the nature of the concept, there is also a similarity to the principle of confidentiality, to the extent that there is no predominant term, with principle, objective, and characteristic standing out.

In what concerns the objects targeted by the definitions, there is a focus on information, data and systems. It is also important to note that ISO standards are focused on organizations' assets, i.e., anything that has value for the organization, be it tangible or intangible. In contrast to confidentiality, there are authors who put under the umbrella of integrity both information and systems. The inclusion of systems is justified by the fact that unauthorized modifications or faults may also target systems, which may provoke unauthorized modifications in information, e.g. the case of processed data produced by an ill-conceived computer program.

Considering the purpose of integrity, there are two major distinct streams. On the one hand, the principle of integrity ensures that information is complete, accurate, and correct. On the other hand, integrity refers to the prevention of unauthorized manipulation, modification, and destruction. In this sense we cannot say there is consensus among the definitions, although it is understood that the two streams are complementary.

Evidence of alteration of information raises some challenges, since it may not be possible to compare the current state of information with its original state. Parker [1998] argues that instead of considering the original state, we should take into account the previous state of information, although this may prove difficult for computed data or for information without source documents.

A relevant issue associated with integrity is its contextual and behavioral dimension. The definition provided by ITGI [2007] stresses that the accuracy, completeness and validity of information should be judged according to the business values and expectations. The impact on integrity of the interpretation of information by people had already been underlined by Dhillon and Backhouse [2000], who argued for the need for users to have the capability to interpret information according to the business rules of the organization they pertain to. If there is a deficit of this capability, even if the values of data, signs and symbols are maintained, the integrity of information will be compromised.

Availability

Availability is the third component of the traditional model for InfoSec. Probably, it is the principle whose compromise is most immediately evident for users.

Table 3 summarizes the review made on the concept of availability.

Table 3: Summary of Availability Definitions

Regarding the scope and nature of availability, the conclusion to be drawn is similar to the one made for the principles of confidentiality and integrity, i.e., availability is proposed mainly in studies related to InfoSec, being named as a principle, characteristic and objective, or as an ability when definitions focus on systems. From the point of view of the object, information and systems stand out.

In what concerns the purpose, it is relevant to distinguish between definitions with a focus on information and on systems, although their meanings are very close. An important issue relates to the access to and possibility of using information and systems in a timely manner and, for systems, performing their function in a given time. However, and in contrast to the previous two principles, the concern with access is not uniquely placed on its restriction (as before, only authorized agents should be granted access to the information); the main concern is rather being able to grant access to those entitled to it. Although access to information is important, the definition advanced by the ISO standards suggests that being able to access information does not imply that the information is usable. The usability of information had already been remarked upon by Parker [1998] and was elaborated by Whitman and Mattord, who qualify access in terms of authorization, format, and obstruction.

A final note regards the timing issue pointed out by Posthumus and von Solms [2004]. Not only may there be a right moment for making information available, but availability is also always tested at a specific moment in time.

2.2 Extensions of the Triad

After reviewing the CIA model, we now present definitions for a set of ten attributes that are not part of the traditional InfoSec framework, but may function as extensions of the CIA triad.

The identification of these attributes resulted from the initial search, which led to the finding of works that, besides the CIA model, refer to other attributes that can also be considered InfoSec principles, taking into account their definitions and characteristics. The same systematic review process was applied to this set of attributes. The analysis of the corresponding definitions followed the same procedure previously employed, with the difference that, in most cases, the number of definitions is much smaller.

The ten attributes that will be discussed next are privacy, reliability, authenticity, non-repudiation, accountability, safety, survivability, utility, accuracy, and possession.

Privacy

Privacy is one of the most discussed topics in the field of InfoSec [Whitman and Mattord 2011] and probably one of the most easily understood by society in general. This is a concept closely related to the concept of confidentiality. Table 4 summarizes the review made on the concept of privacy.

Table 4: Summary of Privacy Definitions

In terms of scope, with the exclusion of those found in the dictionaries, the definitions are always presented in the context of privacy itself.

Concerning the definitions' nature, privacy is referred to as a state, a principle and even a claim.

The object of the reviewed definitions is information, although this results directly from the focus of the literature search. However, the ACL [2001] definition suggests that the meaning of the word is related to people and not to information. A similar view had already been advanced by Parker [1998, p. 227], who argued that privacy refers to a human and constitutional right or freedom, preferring to reason in terms of confidentiality of information in order to protect the privacy of people. Indeed, Prosser [1960] defined the privacy rights of an individual as opposition to several actions, including the public disclosure of embarrassing private facts about an individual. Over time, the use of the word privacy has expanded to encompass other objects, as can be noticed in the conceptualization of Clarke [2006], who identifies the privacy of the person, the privacy of personal behavior, the privacy of personal communications, and the privacy of personal data.

Regarding the purpose, there are two distinct senses. One has to do with control of personal information held by third parties, including control over the disclosure of that information, and the other concerns non-intrusion, anonymity, unlinkability, undetectability, and unobservability. Pfitzmann and Hansen [2010] have proposed and characterized these last terms for a better understanding of the privacy concept, helping to clarify the privacy construct and to distinguish it from confidentiality. The clarifications advanced by those authors for the supporting concepts of privacy are the following: anonymity: a subject is not identifiable within a set of subjects; unlinkability: it is not possible to sufficiently distinguish whether an item of interest (subject, message, or action) is linked to another item(s) of interest; undetectability: it is not possible to sufficiently distinguish whether an item of interest exists or not; and unobservability: it refers to the anonymity of an item of interest or to the undetectability of a subject.
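As a concrete, if simplified, reading of the anonymity notion above (a subject not being identifiable within a set of subjects), one can compute the set of subjects that are indistinguishable from the originator of an item of interest given the attributes visible to an observer. The Python sketch below uses invented records and is only a loose illustration, not a method proposed by Pfitzmann and Hansen [2010]:

# Hypothetical subjects and the attributes an observer can see about the originator.
subjects = [
    {"id": "s1", "zip": "4800", "year_of_birth": 1980},
    {"id": "s2", "zip": "4800", "year_of_birth": 1980},
    {"id": "s3", "zip": "4800", "year_of_birth": 1975},
    {"id": "s4", "zip": "1000", "year_of_birth": 1980},
]

def anonymity_set(observed: dict, subjects: list) -> list:
    """Subjects indistinguishable from the originator given the observed attributes."""
    return [s for s in subjects
            if all(s[attr] == value for attr, value in observed.items())]

candidates = anonymity_set({"zip": "4800", "year_of_birth": 1980}, subjects)
print(len(candidates))  # 2: the originator is hidden within a set of two subjects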

In the case of privacy, the owner of the information is generally the individual to whom the information relates (the person in personal information). There may be other entities that hold the information; however, they usually act as custodians of that information.

Reliability

The concept of reliability is used in several contexts, with a special emphasis on the domain of systems dependability. Table 5 summarizes the review made on the concept of reliability.

Table 5: Summary of Reliability Definitions

The principle of reliability, unlike the previously reviewed principles, is mainly proposed in the realm of systems instead of information. Neither the ITGI nor the ISO documents associate reliability directly with information security.

Regarding the nature of the concept, in the perspective of systems it is presented as an attribute or ability. In the perspective of information it is presented as a property.

The object mainly targeted by the definitions is the system, with the purpose of the principle being the continuity of service delivery or the flawless functioning of a system. Indeed, Trivedi et al. [2009] suggest that reliability may be conceived as a measure of the continuity of service.
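Reading reliability as a measure of the continuity of service suggests simple empirical indicators, such as the mean time between observed failures of a system. The sketch below uses invented failure timestamps and a generic estimator; it is not a metric taken from Trivedi et al. [2009]:

from datetime import datetime

# Hypothetical timestamps at which a system was observed to fail.
failures = [
    datetime(2013, 1, 3, 8, 0),
    datetime(2013, 2, 14, 20, 30),
    datetime(2013, 4, 2, 5, 15),
]

def mean_time_between_failures(failures: list) -> float:
    """Average number of hours of continuous service between consecutive failures."""
    gaps = [(b - a).total_seconds() / 3600.0 for a, b in zip(failures, failures[1:])]
    return sum(gaps) / len(gaps)

print(round(mean_time_between_failures(failures), 1))  # average hours of uninterrupted service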

Authenticity

In contrast to reliability, the principle of authenticity is applicable to information itself. It is also often applied to processes and people, as can be observed in Table 6.

Table 6: Summary of Authenticity Definitions

This principle is proposed in the realms of InfoSec and dependability. As regards the nature, the definitions reviewed do not convey a clear tendency. In what concerns purpose, it is relevant to distinguish between the authenticity of users, i.e., the confirmation of their identity, and the authenticity of information, in the sense of it being genuine.

At first glance, there is a certain degree of overlap between authenticity and integrity. Probably, one of the main supporters of the distinction between the two principles is Parker [1998]. This author proposed a pair of principles formed by integrity and authenticity, in which the first relates to completeness and the second to validity. According to his view, an entity is authentic if it represents the desired facts and reality. To illustrate the differences between the two principles, Parker presents a scenario in which a software distributor obtained a computer game program from an obscure publisher. The distributor modified the name of the publisher on the media and title screens to that of a well-known publisher and then made copies of the media. Without informing either publisher, the distributor disseminated copies of the program in a foreign country. Parker observes that the program had integrity because it identified a publisher and was complete and sound. However, it was not an authentic game from the well-known publisher, i.e., it did not conform to reality since it misrepresented the publisher of the game.

Non-Repudiation

The principle of non-repudiation shares some of the features of authenticity. Given the apparent overlap between the concepts, Parker [1998] argues that non-repudiation is contained and covered by authenticity, since it is a form of misrepresentation by rejecting information that is actually valid, and he does not include this principle in his InfoSec framework. A different view was conveyed by the USA Department of Defense, which added non-repudiation to the traditional InfoSec model [DoD 2002]. Therefore, it is important to analyze definitions for non-repudiation proposed in other contexts. Table 7 summarizes the review made on the concept of non-repudiation.

Table 7: Summary of Non-Repudiation Definitions

Non-repudiation is proposed mainly in InfoSec, although it was not possible to isolate a consensual nature for this principle. This principle plays a central role in communications security, where it is important to prove that a message originated from a specific sender (non-repudiation of origin) and that a message was accepted by a specific receiver (non-repudiation of reception).

The object focused on by the definitions varies, with information and actions or events receiving a special accentuation.

Concerning the purpose, there is an emphasis on the identification of a particular entity and the unequivocal association of that entity with an event or action. The essential point of this concept rests on the capability to ensure that a certain event did or did not occur and, in the first case, to be able to identify the entities involved. In other words, all actions relevant to InfoSec made in a system are known and cannot be denied or hidden by their authors [Lazzaroni et al. 2010].

A final observation regards the importance of the inverse of repudiation. Besides having the ability to demonstrate that a certain agent has actually made certain transactions even when the agent denies it, it is also important to be able to demonstrate that a certain agent has not performed certain actions even if the agent claims to have made such transactions.
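In communications security, non-repudiation of origin is commonly supported by digital signatures, since only the holder of the private key could have produced a valid signature over a given message. The sketch below is a generic illustration using the third-party cryptography package for Python (assumed to be installed); it is not a mechanism prescribed by the definitions reviewed here:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The sender signs the message with a private key that only the sender holds.
sender_key = Ed25519PrivateKey.generate()
message = b"Order 150 units, reference 2013-077"
signature = sender_key.sign(message)

# Anyone holding the sender's public key can later demonstrate the message's origin.
public_key = sender_key.public_key()
try:
    public_key.verify(signature, message)
    print("Valid signature: the sender cannot plausibly deny having issued the message.")
except InvalidSignature:
    print("Invalid signature: origin not demonstrated.")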

Accountability

Accountability is presented by several authors as an InfoSec principle, despite not applying directly to information itself, but to the people that manipulate information. In the definitions analyzed, some parallels between the definitions of non-repudiation and accountability are immediately recognizable, especially in aspects related to the identity or identification of people. Table 8 summarizes the review made on the concept of accountability.

Table 8: Summary of Accountability Definitions

This principle is mainly addressed in studies related to InfoSec, and users are its main object. The purpose of accountability shares similarities with the purpose of non-repudiation, in that both seek to attribute responsibility for events or actions to a given entity. From the definitions reviewed, it was also possible to verify the connection between accountability and the activities of control and auditing, leading Whitman and Mattord [2011] to observe that accountability is also known as auditability. Still, in the realm of dependability, non-repudiation is usually applied to the transmission of messages, while accountability is applied to people's identity. The main concern, though, is to be able to assign and impute to a specific entity the consequences of a certain action or decision that was detrimental to the security of IS [ISO 2009]. This assumes particular relevance in the attribution of blame and in cases of disputes settled in court.
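Because accountability is tied to control and auditing, it is typically supported by audit trails that record which identified user performed which action, and when, so that consequences can later be imputed to a specific entity. The following append-only log is a hypothetical illustration, not a design taken from the reviewed sources:

import json
from datetime import datetime, timezone

audit_log = []  # in practice this would be tamper-evident, append-only storage

def record_action(user: str, action: str, target: str) -> None:
    """Append an audit record linking an identified user to an action on a target."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "target": target,
    })

record_action("j.silva", "delete", "customer-record-4411")
record_action("a.costa", "export", "payroll-2013.csv")

# Auditing: list every action imputable to a given user.
print(json.dumps([e for e in audit_log if e["user"] == "j.silva"], indent=2))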

Safety

The concept of safety was found in the literature on dependability of systems. In the Portuguese dictionary, safety is presented as a synonym of security [ACL 2001], corresponding to the absence of danger. The meaning advanced by the English dictionary is closer to the definitions proposed by systems dependability researchers, as shown in Table 9.

Table 9: Summary of Safety Definitions

In the definitions analyzed, safety is used in the realm of systems, and there is significant consensus regarding the purpose of this principle, namely the absence of catastrophic consequences for the environment and people. In centering the concept on the effects of system misbehavior, proponents classify consequences as catastrophic, circumscribing the concept to situations where outcomes exceed a certain threshold (e.g., when human lives are in danger or are lost). Essentially, it is an ability or property of a system not to cause harm to the environment and people.

Survivability

As in the case of other concepts already reviewed, survivability is presented not as a principle applied directly to information, but as one applied to systems in general. Table 10 summarizes the review made on the concept of survivability.

Table 10: Summary of Survivability Definitions

As mentioned, the object focused on by the definitions is the system (computers and networks). Concerning the nature, survivability is usually presented as a system's ability, whose purpose is to maintain operation even in the presence of failures or attacks. This principle is interrelated with the resilience trait of systems, understood as the capability to respond to and recover quickly from crisis situations. In order to survive, a system needs to adapt to the changing conditions of its environment (attacks, failures, and accidents) so that restoration of a minimum level of service is attainable.

Utility

In the field of InfoSec, the principle of utility appears in the framework proposed by Parker [1998]. Table 11 summarizes the review made on the concept of utility.

Table 11: Summary of Utility Definitions

Parker articulates the principle of utility with the principle of availability, relating the first to the usefulness of information and the second to the usability of information. To illustrate the difference between these principles, Parker outlines a scenario where an employee who routinely encrypts the only copy of valuable information stored on his computer accidentally erases the encryption key of the file. In this case, the availability of the information is maintained, but its usefulness is lost. In a way, Parker restricts availability to the preservation of access to information, separating its access from its use. If a user accesses information that is presented in a language he does not understand, we will have a compromise of utility, although the information maintains its availability.

By indexing the utility of information to a specific purpose, Parker brings to the realm of InfoSec concerns about the degree of usefulness of information for the task users have at hand.

Accuracy

The concept of accuracy is used in many different contexts. In the field of InfoSec it was introduced by Whitman and Mattord [2011] in their expanded model of critical information characteristics. Table 12 summarizes the review made on this concept.

Table 12: Summary of Accuracy Definitions

Whitman and Mattord [2011] combine in the principle of information accuracy the freedom from mistakes and errors and the existence of the value that the end user expects, arguing that if information has been modified, intentionally or unintentionally, it is no longer accurate. At first glance, this understanding approximates the definition of integrity. However, if we consider the examples provided by those authors we are able to clarify the meaning of accuracy. Whitman and Mattord [2011] illustrate the principle using a checking account example. An individual assumes that the information contained in the checking account is an accurate representation of his finances. Incorrect information in the checking account may result from external or internal errors. A bank teller may mistakenly add or subtract too much from the account, incorrectly changing the value of the information. The account holder may accidentally enter an incorrect amount in his account register. In contrast to the integrity definitions, both situations described in the checking account example differ in that the agents (bank teller and account holder) are authorized to change the information; however, they modify it to an incorrect value.

Possession

The principle of possession of information is part of the expanded InfoSec model proposed by Parker [1998]. Whitman and Mattord [2011] have also included this principle in their InfoSec model. Table 13 summarizes the review made on the concept of possession.

Table 13: Summary of Possession Definitions

From these definitions, the aspect of information control stands out. Nowadays, it seems consensually accepted that ownership and control of information are perhaps the most important sources of power in organizations.

Parker [1998] justifies the addition of this principle to the traditional model so that InfoSec efforts may prevent certain types of losses, such as theft of information. The rationale outlined by that author is clarified by contrasting possession with confidentiality. By definition, the principle of confidentiality deals only with what people know, not with what they possess without knowing. To illustrate this difference, Parker describes a scenario where burglars broke into a computer center and stole media containing the company's computer master files and the associated backup copies of the files. The gang held the materials for ransom. Confidentiality was not an issue because the burglars had no reason to read or disclose the information contained in the files. The company lost possession of the files (availability was delayed, but the firm could retrieve the information at any time by paying the ransom). Increasingly, we possess significant information that we do not know, but that we own, such as object files.

Since it is usually extremely simple and cheap to produce additional copies of information, we may have different degrees of control regarding the information we own. This will be the case of having exclusive or shared possession of information, as well as being able to regain ownership after a temporary loss of possession.

2.3 Complements to the Triad

In this section we review four principles proposed by Dhillon and Backhouse [2000] that constitute a clear departure from the CIA triad. Instead of restricting their attention to information stored or in transit in silicon processors (computers and networks), those authors focused on the security challenges posed and faced by biological processors of information (people). Consequently, this group of principles mainly focuses on the conduct and behavior of people in an organizational context that may have an impact on the integrity of the organization as a whole, or on the security of information manipulation activities in particular. Thus, the four principles form a complement to the CIA triad instead of just extending the traditional model of InfoSec.

The four principles are known by the acronym RITE (Responsibility, Integrity, Trust, Ethicality) and they are defined below.

Responsibility: In contrast to accountability, this principle refers to the responsibility that each member of an organization should observe when performing their function. With the disappearance of vertical management structures, a clear perception of personal responsibility and knowing what roles to play within the organization become increasingly important. The relevance of responsibility is more acute when new circumstances arise in the organization and it becomes necessary for someone to voluntarily assume the responsibility (even if it has not been assigned) to deal with those circumstances. Otherwise, by not assuming the new responsibility, the level of risk of the information system may increase and the integrity of the organization may be in jeopardy.

Integrity: In today's organizations information is one of the most valuable assets; however, it is an asset that by its nature may be easily divulged to unauthorized parties. In this respect personal integrity is of particular importance. Nevertheless, organizations do not always check the references of their future employees before granting them access to sensitive information, and even if they do, there is no guarantee that a person maintains their integrity forever. Although the designation of this principle is the same as the one that composes the CIA triad, in this context integrity is connected to the loyalty of the members of the organization.

Trust: Trust, as opposed to external control, is of particular importance in geographically diffuse organizations where members cannot control each other and "physical supervision" is not an option. In this context it is expected that each member acts according to the norms and standards of behavior accepted and implemented by the organization, regardless of the distance that lies between the member and the organization's physical core.

Ethicality: This principle advocates that members of an organization should adopt ethical behaviors even if these are not formally defined and implemented. It essentially deals with the informal relationships that are established within the organization and with behavior in the face of new situations for which there are simply no pre-defined rules on how to act or interpret.

2.4 Remarks on the Review

In this section we present a set of remarks on the literature reviewed and on the InfoSec principles that were identified and discussed.

The first remark is that InfoSec, currently, may still develop largely around the traditional principles of confidentiality, integrity, and availability. This situation is of particular concern when international standards, such as the ISO/IEC 27000 family, especially ISO/IEC 27001 due to its role in certification, do not yet incorporate in their content explicit and accurate references to and concerns with additional InfoSec principles that are fundamental for dealing with the growing complexity of threats to the security of organizations' information assets.

As can be seen in the review presented, it is relatively easy to find scientific literature on the traditional principles of InfoSec, contrary to what is the case for other InfoSec principles.

Additionally, there were few authors who sought to expand the traditional CIA model by proposing new models or frameworks that include additional InfoSec principles, with the works by Parker [1998], Dhillon and Backhouse [2000] and Whitman and Mattord [2011] being exceptions.

Regarding the InfoSec principles that were identified, from the point of view of the focused object, they can be grouped into three major classes: focus on information, focus on systems, and focus on people.

In an attempt to synthesize the literature reviewed, we present in Figure 1 the result of crossing two of the elements analyzed in the definitions that we considered important to characterize and distinguish each of the principles: the purpose and the focused object. Based on this diagram one can check the positioning of the principles in relation to those two axes, as well as the intersections or proximity that exist between principles.

Figure 1: Summary of Literature Reviewed

3. Relating InfoSec Threats to InfoSec Principles

In the field of InfoSec, threats are a fundamental concept. Threats feed risk analysis exercises, concentrating the attention of security managers who design and implement controls aimed at mitigating the effects of threats turned into attacks. One way to connect threats and their potential impacts on information system assets is by considering which InfoSec principles may be in jeopardy if threats materialize.

In order to get a better grasp of the InfoSec principles reviewed, and thus take a further step towards the proposal of a revised framework of InfoSec principles, we crossed the principles with a battery of InfoSec threats. This exercise served as a test of the scope and completeness of the list of principles.

To operationalize this test we needed to instantiate the principles against a sufficiently broad and representative set of InfoSec threats. To this end we condensed a Unified Threat Catalog (UTC) that resulted from the fusion of the four distinct threat and attack catalogs featured in Table 14.

Table 14: InfoSec Threat and Attack Catalogs

The option of condensing the four catalogs into a unified view, instead of simply applying one of the catalogs, was taken because we concluded that, per se, the original catalogs contained InfoSec threats and attacks that were either too general or too technical and detailed. This would hinder the process of relating threats and attacks to the previously identified InfoSec principles. It was also considered that none of the catalogs alone was sufficiently complete and comprehensive to carry out the intended exercise. Thus, we decided to prepare a new catalog resulting from the merger of the four catalogs mentioned above, taking as its base the catalog issued by the German Federal Office for Information Security. Of the four catalogs, that one was considered the most comprehensive and complete and the one where, despite some exceptions, the threats and their respective descriptions are relatively straightforward to understand, being neither too general nor too detailed and technical.

The procedure for creating the UTC consisted of the following steps:

At the end we obtained a catalog composed of 422 threats organized into five categories and 32 subcategories, as presented in Table 15.

Table 15: Structure of the Unified Threat Catalog

The main categories were taken directly from the German catalog. The subcategories were introduced to allow a more logical organization of similar threats (from the point of view of their consequences on InfoSec principles) that were otherwise scattered and could complicate the analysis of the catalog.

The process of relating threats to principles consisted of identifying, for each of the threats, the potentially affected principles. The intersection of the threats with the principles was based on a strict interpretation of the description of each threat, in order to reduce the degree of subjectivity in the evaluation. In the case of threats with descriptions that were too general, whose attribution to specific InfoSec principles was unfeasible, we chose not to consider them, marking those threats as generic/general.
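Operationally, the crossing amounts to building a mapping from each cataloged threat to the set of principles it may affect and then aggregating counts per principle, with unmatched threats flagged as generic/general. The Python sketch below uses invented threat names and assignments purely to show the shape of the exercise; it does not reproduce UTC entries:

from collections import Counter

# Hypothetical excerpt of a threat-to-principles mapping (not actual UTC entries).
threat_to_principles = {
    "Eavesdropping on network traffic": {"Confidentiality", "Possession"},
    "Unauthorized modification of records": {"Integrity", "Authenticity"},
    "Power failure in server room": {"Availability"},
    "Vaguely described threat": set(),  # marked as generic/general
}

impact_counts = Counter(p for principles in threat_to_principles.values()
                        for p in principles)
for principle, count in impact_counts.most_common():
    print(f"{principle}: {count} threat(s)")

generic = [t for t, ps in threat_to_principles.items() if not ps]
print("Generic/general threats:", generic)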

The outcomes of undertaking the process led to the formulation of four propositions.

Reinterpretation of Survivability Principle

Considering the definitions of the survivability principle, and taking into account that none of the UTC threats matched this principle, we reinterpreted it as a contributor to availability. In this view, survivability is perceived as the ability of a system to endure severe situations. The ability to withstand serious attacks and to be tolerant of failures is particularly important for systems comprising national critical information infrastructures, and it has attracted much attention in the areas of cyber defense and SCADA (Supervisory Control and Data Acquisition) systems security.

Discard of Safety Principle

As defined by several authors, safety means the absence of catastrophic consequences on the environment caused by a given system. Safety may be considered a general principle related to adverse situations, since it aims to preserve the environment outside a given system, information systems included. Furthermore, it was not possible to attribute any of the UTC threats to safety.

Proposal of Legality Principle

During the crossing process, it was found that some threats with legal consequences for the organization, which could jeopardize InfoSec, could not be matched to any of the principles identified. We therefore propose the inclusion of a new principle, named legality, in order to address those particular threats.

We consider this principle particularly relevant at the present time, given the increased need for InfoSec professionals to ensure the compliance of information systems controls with several regulatory instruments [Berghel 2005].

Maintenance of RITE Principles

Analyzing the results of the crossing process, one notes that the RITE principles have a reduced expression in terms of the number of threats that may impact them. At first, one could be led to discard these principles, but a finer consideration of the contents of the UTC may suggest a different alternative. Indeed, the UTC resulted from the fusion of four catalogs, so it shares the qualities and shortcomings of those underlying catalogs.

We argue that one of the shortcomings of the UTC is its adherence to the traditional InfoSec principles, namely the CIA triad, leaving out other potential principles, especially those that complement confidentiality, integrity and availability, as is the case of the RITE principles. Indeed, the UTC and the base catalogs may suffer from a too restrictive focus on information stored in and in transit between IT systems. As Dhillon and Backhouse [2000] have noted, "The traditional information security principles of confidentiality, integrity and availability are fine as far as they go, but they are very restricted. They apply most obviously to information seen as data held on computer systems where confidentiality is the prevention of unauthorized disclosure, integrity the prevention of the unauthorized modification, and availability the prevention of unauthorized withholding of data or resources." As the authors conclude, it is a common conception to apply the CIA triad at the technical level, but it is the human and social context that determines the success of InfoSec technical controls.

In order to find more robust support for our claim regarding the UTC (and the originating catalogs), we undertook an additional analysis of the UTC, this time by relating its threats to the elements of Alter's [1999, 2008] Work System Model (WSM). The goal of this process was to evaluate the degree of coverage of the UTC's set of threats.

According to Alter [2008, p. 451], a work system is "a system in which human participants and/or machines perform work (processes and activities) using information, technology, and other resources to produce specific products and/or services for specific internal or external customers." Alter views information systems as work systems whose processes and activities are dedicated to capturing, transmitting, storing, retrieving, processing, and displaying information. In Figure 2 we present the architecture of the WSM.

Figure 2: Work System Model [Alter 2008, p. 461]

The process that we undertook to relate the UTC and the WSM was similar to the one applied to the crossing of the UTC and InfoSec principles: based on the description of each threat, we determined which elements of the WSM would suffer the consequences of the threat. Figure 3 summarizes the results of this process of relating the UTC and the WSM. The three most affected elements of the WSM are Information, Technologies, and Processes and Activities.

Figure 3: Correspondence between UTC and WSM

It is evident that UTC threats do not uniformly cover all WSM elements, being concentrated on informational, technological, and functional aspects. Therefore, we argue there is a need to assess threats that may have an impact on elements such as Strategy, Environment, and Products and Services, but also to focus on information stored, communicated, and manipulated by biological processors, which are encapsulated in the WSM Customers and Participants elements. According to the WSM definitions, customers are people who benefit directly from the products and services produced by the work system. Participants include people who perform work in the business process, who may use IT extensively or residually, or who do not use IT at all.

It should be mentioned, however, that, as in the case of relating UTC threats to InfoSec principles, an effort was made to interpret the description of each threat narrowly and objectively, which helps to explain why WSM elements such as Products and Services, Environment, and Customers present very small values in what concerns the impacts of UTC threats. Under a broader interpretation it would be logical to assume that threats that have an impact on Information, Technologies, and Processes and Activities may have direct consequences, for example, on the Products and Services, and Customers of an organization.

4. Proposal of a Revised Framework of Information Security Principles

In this section we present the revised framework of InfoSec principles and briefly describe the processes that led to its creation.

The framework relies on the literature review process that was conducted, in which we identified a set of concepts that, by their definitions and characteristics, were initially considered as InfoSec principles.

Subsequently, this initial set of principles was evaluated in terms of completeness and wholeness through its intersection with the UTC. The procedure adopted was to identify the principle or principles affected by each of the threats, thus seeking to ensure that the proposed framework would encompass all threats contained in the adopted catalog and to assess the need to suggest new InfoSec principles due to threats unmatched to the initial set of principles. Additionally, we also related the UTC to the elements of the WSM. The two procedures helped us to draw broader conclusions regarding the preponderance of the proposed principles and of the elements of the WSM for InfoSec, providing additional robustness to the proposal and also allowing a different approach to the problem underlying this work.

The framework formulation process was iterative in nature, since over the course of it we needed to revisit and adjust the definitions that were initially assumed to be appropriate, as well as to review the structure and organization of the framework itself.

Next, we present the revised framework of InfoSec principles, followed by an exposition of a set of logical implications between the principles and a reappraisal of the relationship between the principles included in the framework and the UTC.

4.1 The Revised Framework

The proposed framework consists of thirteen principles and five sub-principles, organized as presented in Figure 4 and with the definitions adopted for each component enumerated in Table 16.

Figure 4: Proposed Framework of InfoSec Principles

The InfoSec principles were grouped into five distinct dimensions: To Know, To Change, To Use, To Comply, and To Be. This arrangement results from the main purpose of the principles pertaining to each dimension, i.e., inside each category, the constituent principles contribute to protecting the same integrative issue.

Table 16: Adopted Definitions for Information Security Principles

The To Know dimension relates to the intrinsic value of information, i.e., its content and meaning. Privacy is presented as a sub-principle of confidentiality in that it refers exclusively to maintaining the confidentiality of specific information about people or their behavior, while the principle of confidentiality regards all kinds of information. Hence, Privacy is conceived as a specialization or refinement of the principle of Confidentiality, although in certain contexts Privacy can, by itself, stand out.

In what concerns Possession, it is considered relevant to follow the perspective advanced by Parker [1998], according to which the violation of this principle does not necessarily imply not holding and not controlling information, as one might assume based on the definition presented. Information, being an intangible and easily replicable asset, may be in the possession of several entities. In this sense, a violation, for example, of the principle of confidentiality implies the violation of the exclusive possession of certain information. However, it is possible to have a situation of shared Possession where the owner of the information continues to own and control the information, simply not doing so exclusively. The principles and sub-principles that integrate this dimension essentially seek to prevent the disclosure of information to, its observation by, and its control by unauthorized entities.

The second dimension, To Change, focuses primarily on actions that result in the modification and manipulation of information. The principles integrating this dimension seek to ensure that any modification of information is authorized by its owner, that it is made only in an authorized manner, by individuals, entities, or processes whose identity is verifiable, and that the information accurately reflects a certain reality.

It is important to clarify the meaning of the definition of the principle of Accuracy. In a first approach, one could consider Accuracy not as a principle, but as a sub-principle of Authenticity, since its definition meets the first part of the definition of Authenticity ("... accurately reflects a certain reality ..."). However, and although it is recognized that there is some overlap between the two principles, the definition of Accuracy focuses particularly on avoiding mistakes, failures, and omissions related to form and content. Illustrative examples are errors such as a misplaced comma in a numeric field, an extra zero or a missing zero, or typos. In this regard it is considered important to stress Accuracy of information as an InfoSec principle.
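Since Accuracy, as framed here, concerns mistakes of form and content such as a misplaced comma or an extra zero, it lends itself to simple validation checks at data entry. The sketch below is a hypothetical illustration (the format rule and tolerance are invented) rather than part of the proposed framework:

import re

AMOUNT_FORMAT = re.compile(r"^\d{1,7}\.\d{2}$")  # e.g. "1250.00"; rejects stray commas

def accurate_amount(entered: str, expected: float, tolerance: float = 0.01) -> bool:
    """Flag form errors (bad format) and content errors (value far from the expected one)."""
    if not AMOUNT_FORMAT.match(entered):
        return False  # e.g. "1.250,00" fails the form check
    return abs(float(entered) - expected) <= tolerance

print(accurate_amount("1250.00", 1250.0))   # True
print(accurate_amount("12500.00", 1250.0))  # False: extra zero, content no longer accurate
print(accurate_amount("1.250,00", 1250.0))  # False: misplaced comma, form error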

The third dimension, To Use, relates directly to the ability to use information or the systems that support and manipulate it. The observance of Availability, Reliability, Survivability, and Utility seeks to ensure that both information and the systems that handle it are available for use by authorized entities, reliably, maintaining a sufficient degree of operation even after an attack or failure, and whenever necessary.

Reliability is considered a sub-principle of Availability. Based on the definition adopted for Reliability, we consider that this concept is a prerequisite (though not exclusively) for the full observance of the principle of Availability, and therefore it does not justify being elevated to a principle in its own right. A similar reasoning guided the decision to classify Survivability as a sub-principle of Availability, pairing it with Reliability.

Regarding the principle of Utility, a superficial analysis could lead one not to consider it an InfoSec principle, but rather a general principle that information and systems must meet. Nevertheless, from an organizational point of view, especially from the point of view of information systems security management, Utility is particularly relevant, as it does not matter to an organization to protect information or systems that are not useful to its activity, i.e., it does not make sense to apply human and financial resources to ensure the availability of useless information, possibly neglecting the protection of information and systems that are strategic for the organization.

The fourth dimension of the framework, To Comply, includes a set of principles and sub-principles which aim to address the need for controlling procedures in an organization, as well as ensuring the technological and regulatory compliance of InfoSec efforts.

We propose the principle of Traceability as arising from the unification of the concepts of Accountability and Non-repudiation, in order to obtain a more complete and comprehensive principle. Therefore, the observance of the principle of Traceability and of the sub-principles Non-repudiation and Accountability ensures that any action relevant to InfoSec is provable, i.e., there are records of those actions, and that it can be unequivocally attributed to a specific individual, entity, or process.

The principle of Legality focuses on safeguarding against negative impacts on InfoSec resulting from non-compliance with legislation and regulation, and on protecting the organization itself in civil and criminal terms. In some countries, such as the USA, there are, for example, restrictions on the use of encryption techniques that can make it impossible to use certain information and jeopardize its security. Furthermore, in the current globalized world, the knowledge of, implementation of, and compliance with InfoSec-related legislation is particularly relevant. It should be noted that this principle was not identified in the literature reviewed; it resulted from the process of relating InfoSec principles to InfoSec threats and from the acknowledgment that regulatory aspects currently have a significant impact on InfoSec.

The fifth dimension, To Be, addresses what members of an organization must be within that organization in order to maintain its well-being and viability. Its constituent principles focus on the behaviors that individuals should adopt, especially when faced with new and unforeseen situations for which there are no formalized rules or codes of practice. These principles depend upon the values, beliefs, and personal motivations of the organization's members, contributing to the establishment and maintenance of an InfoSec culture [Dhillon 2007]. This dimension has a principle (Integrity) whose designation is the same as that of a principle pertaining to the To Change dimension. We chose to keep the designations that were in use by tradition or as named by their proponents. Nevertheless, the semantics associated with each of those principles is clearly distinct.

4.2 Relationships between Principles of the Framework

In this section we make explicit the relationships between some of the principles that compose the revised framework, in the form of logical implication propositions. These propositions are exclusively supported by and based on the definitions provided in Table 16. Table 17 shows the logical propositions and the corresponding interpretation in natural language.

Table 17: Rela­tion­ships of Log­i­cal Impli­ca­tion between Prin­ci­ples of Infor­ma­tion Secu­rity


Regarding the first proposition, it is logical to assume that a breach of confidentiality will, perforce, imply a breach of possession. However, the inverse proposition does not hold, i.e., a breach of possession does not necessarily imply a breach of confidentiality. For example, in the case of negligent or accidental deletion of confidential information there is a breach of the principle of possession, in the sense that the information is no longer owned and controlled, but there is no disclosure of information to unauthorized entities; the information simply ceased to exist. Analyzing this proposition not from the perspective of the violation of the principles but from the perspective of their preservation, one can infer that the preservation of possession implies the preservation of confidentiality. As an illustrative example, if an individual or organization has ownership (as per the definition given for possession) of a given piece of information, that information will only cease to be confidential if the owner so desires. It should be recalled that possession is here understood as exclusive possession.

The sec­ond propo­si­tion fol­lows the same logic advanced for the first propo­si­tion, and derives from the fact that pri­vacy is con­sid­ered a sub-prin­ci­ple of con­fi­den­tial­ity.

Regarding the third proposition, it is relevant to note that, to be useful, information or systems must necessarily be available when they are needed; otherwise there is a violation of the principle of utility, in the sense that if they are not available they cannot be used for a particular purpose.

However, the inverse proposition (¬Utility ⇒ ¬Availability) does not hold. For example, an organization may have at its disposal a large amount of information, and therefore fully available information, that does not serve its purpose, i.e., useless information that there is no interest in protecting. From the point of view of the preservation of the principles (instead of their violation), it follows that the preservation of utility implies the preservation of availability: in light of the definition given to utility, information will only be used if it is available.

In the fourth proposition we relate the principle of authenticity to the principle of traceability. The observance of the principle of traceability, and consequently of its sub-principles non-repudiation and accountability, presupposes the existence of authentic information in accordance with the definition advanced for authenticity. If this condition is not met, e.g., if there is false information recorded about the identity of a user who manipulated certain information, the principle of traceability is violated, since it is not possible to unambiguously determine who in fact manipulated the information. This is an interesting proposition because it relates principles pertaining to two different dimensions.

The impli­ca­tion rela­tion­ships that were advanced are those that we assume as uni­ver­sal, i.e., ver­i­fi­able in any con­text in the light of the pro­posed def­i­n­i­tions. In cer­tain con­texts or par­tic­u­lar cases it may be pos­si­ble to infer other log­i­cal propo­si­tions.
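In symbolic form, using ¬P to denote the violation of a principle P, the four propositions discussed above can be summarized as follows. This is a compact sketch based on our reading of the prose, not a reproduction of Table 17; in particular, the formulation of the second proposition is inferred from the statement that it follows the same logic as the first, with Privacy as a sub-principle of Confidentiality.

```latex
% Violation form (left) and equivalent preservation (contrapositive) form (right).
% Requires amsmath.
\begin{align*}
\neg\,\text{Confidentiality} &\Rightarrow \neg\,\text{Possession}
  & \text{Possession} &\Rightarrow \text{Confidentiality}\\
\neg\,\text{Privacy} &\Rightarrow \neg\,\text{Possession}
  & \text{Possession} &\Rightarrow \text{Privacy}\\
\neg\,\text{Availability} &\Rightarrow \neg\,\text{Utility}
  & \text{Utility} &\Rightarrow \text{Availability}\\
\neg\,\text{Authenticity} &\Rightarrow \neg\,\text{Traceability}
  & \text{Traceability} &\Rightarrow \text{Authenticity}
\end{align*}
```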

4.3 Revisiting the Relationship between InfoSec Threats and InfoSec Principles

The proposed framework of InfoSec principles combines a set of definitions for the constituent principles with a structuring of those principles. These two features of the framework justify revisiting the previously established relationships between the InfoSec threats, based on the UTC, and the InfoSec principles. The analysis of these relationships provides a holistic and quantitative view of the relations between the InfoSec threats and the InfoSec principles as proposed in the framework.

Fig­ure 5 dis­plays the InfoSec prin­ci­ples with the high­est num­ber of matches with the UTC threats.

The results reflect the implication relationships described in Section 4.2, with the principles of Possession, Availability, Utility, Confidentiality, and Integrity being the most affected. However, this does not mean that we can overlook or underestimate the threats to the remaining principles, since we argue that information security depends on the observance of the framework as a whole.

It is also per­ti­nent to men­tion that the threat cat­a­logs that led to the UTC gen­er­ally fol­low the tra­di­tional CIA model, which helps to explain the preva­lence of threats on the tra­di­tional InfoSec prin­ci­ples (the results obtained for Pos­ses­sion and Util­ity, which are not part of the tra­di­tional model, derive from the log­i­cal impli­ca­tion propo­si­tions exposed in Sec­tion 4.2).


Fig­ure 5: Num­ber of Matches UTC vs. InfoSec Prin­ci­ples

Another per­spec­tive is pro­vided by the analy­sis of results obtained for the dimen­sions that con­sti­tute the revised frame­work, as illus­trated in Fig­ure 6.


Fig­ure 6: Num­ber of Matches UTC by Dimen­sion

The dimension To Know, constituted by the principles of Confidentiality and Possession and by the sub-principle of Privacy, and the dimension To Use, constituted by the principles of Availability and Utility and by the sub-principles of Reliability and Survivability, are those that concentrate the greatest number of matches.

We argue that the absence of matches with the principles of the To Be dimension should not prompt the removal of the corresponding principles, nor should it be interpreted as a sign of subsidiarity of those principles in relation to all the others. The common view in InfoSec points to people as the weakest link of InfoSec protection efforts, thus implying that special consideration should be given to the role of people. As a complement to that view, we argue that people are also the strongest link of InfoSec protection efforts: not only are the quality and appropriateness of security controls a function of the individuals who design, implement, interpret, and maintain those controls, but the members of the organization also form the last line of defense against threats not addressed, or only partially covered, by technical safeguards. Instead of discarding the To Be principles, we urge the development of an encompassing catalog of threats that takes into account the personal and social dimensions of InfoSec.

5. Metrics for the Information Security Principles

After presenting the revised framework of InfoSec principles, we are now in a position to address the issue of measurement. In this section we suggest, for each principle, metrics that may assist, in case of attack or failure, in assessing the extent to which that principle was compromised.

It should be noted from the outset that the suggested metrics are purely conceptual, resulting directly from the definitions adopted for each of the InfoSec principles, and have not yet been tested in real environments. Basically, this is an initial effort to mitigate the gaps identified in the literature on InfoSec metrics and to assign measures directly to the InfoSec principles.

Research on InfoSec met­rics is rel­a­tively recent and there are no con­sol­i­dated ref­er­ences widely accepted by the sci­en­tific com­mu­nity, InfoSec pro­fes­sion­als and man­agers to assess the level of InfoSec of an orga­ni­za­tion [Pfleeger 2009].

The absence of measurement references stems from several factors, including the difficulty of measuring InfoSec [Pfleeger and Cunningham 2010]; the dependence on subjective, human, and qualitative inputs and on illusory means of obtaining measurements, together with a lack of understanding of information security mechanisms [Jansen 2011]; and the immaturity of research efforts and the fragmentation of the knowledge areas that need to combine efforts to produce a holistic model for InfoSec measurement [Savola 2007]. This does not mean, however, that there have been no important contributions to InfoSec evaluation over time, such as TCSEC, SSE-CMM (Systems Security Engineering Capability Maturity Model), and Common Criteria, as well as proposals of high-level taxonomies for InfoSec metrics (cf. Chew et al. [2008], CISWG [2005], Savola [2007], Seddigh et al. [2004]).

Besides these contributions, NIST and ISO have produced two major works on InfoSec metrics, namely NIST SP 800-55 Rev. 1 and ISO/IEC 27004, which provide guidelines for the development, selection, and implementation of InfoSec measures. These documents include illustrative and candidate InfoSec measures to evaluate the effectiveness and quality of organizations' information security protection efforts. This stated purpose of evaluation is consistent with several definitions of InfoSec metrics, such as "quantitative measurements of trust indicating how well a system meets the security requirements" [Wang 2005] and "measurable standards to monitor the effectiveness of goals and objectives established for IT security" [Patriciu et al. 2006].

According to Wang [2005], the InfoSec metrics developed so far suffer from five limitations: security metrics are often qualitative rather than quantitative; they are subjective rather than objective; they are defined without a formal model as underlying support; there is no time aspect associated with current security metric definitions; and traditional two-valued logics are not suitable for security analysis.

To counteract this state of affairs regarding InfoSec measures, Jaquith [2007] and Jansen [2011] proposed several characteristics that metrics should exhibit in order to be useful, effective, and objective. Among these characteristics are the following: to be consistently measured, to have a low cost of implementation, to be expressed numerically or as a percentage, to use a unit of measurement, and to be relevant to those who are going to analyze them.

With these features in mind, and aiming to evaluate the extent of compromise of the InfoSec principles, we propose the basic set of metrics listed in Table 18.

Table 18: Met­rics for Infor­ma­tion Secu­rity Prin­ci­ples


As noted, the metrics flow directly from the definitions adopted for the InfoSec principles and form a first iteration towards a set of metrics that exhibits the ideal characteristics previously enumerated. In future versions, more sophisticated formulations of the metrics should take into account issues such as the value of the information affected, the criticality of the systems impaired, the costs of the incident, and the effect of time.

Two of the main features of the proposed set of metrics are its simplicity and its ex post nature. The metrics are simple to understand and to quantify, although some require the collection of baseline data and the establishment of the corresponding data collection structures (e.g., to measure confidentiality-related breaches it is necessary to first perform an information inventory, classify the information, and define the entities authorized to access it). They are also a posteriori, or ex post, measures, i.e., they focus on after-the-fact events, since they provide indications regarding the compromise of InfoSec principles. This implies that the accuracy of the measures is totally dependent on the detection capabilities of the organization: if a breach is not known or acknowledged by the organization, the respective measure will not reflect it.
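As an illustration of this kind of ex post measure, the following sketch computes a hypothetical confidentiality metric: the share of confidential items involved in confirmed unauthorized disclosures during a reporting period. The data structures, names, and exact formulation are our own illustration and are not taken from Table 18; they merely show the baseline data (inventory, classification, authorized entities) mentioned above.

```python
# Hypothetical ex post metric for the Confidentiality principle: fraction of
# confidential information items disclosed to unauthorized entities, based on
# an assumed information inventory and a log of confirmed disclosure events.
from dataclasses import dataclass

@dataclass(frozen=True)
class InfoItem:
    item_id: str
    classification: str                  # e.g. "public", "internal", "confidential"
    authorized_entities: frozenset[str]  # entities allowed to access the item

@dataclass(frozen=True)
class DisclosureEvent:
    item_id: str
    recipient: str                       # entity that actually received the item

def confidentiality_breach_ratio(inventory: list[InfoItem],
                                 events: list[DisclosureEvent]) -> float:
    """Share of confidential items involved in confirmed unauthorized disclosures."""
    confidential = {i.item_id: i for i in inventory
                    if i.classification == "confidential"}
    breached = {e.item_id for e in events
                if e.item_id in confidential
                and e.recipient not in confidential[e.item_id].authorized_entities}
    return len(breached) / len(confidential) if confidential else 0.0

# Toy example: one of two confidential items was leaked to an outsider.
inventory = [
    InfoItem("doc-1", "confidential", frozenset({"alice", "bob"})),
    InfoItem("doc-2", "confidential", frozenset({"alice"})),
    InfoItem("doc-3", "internal", frozenset({"alice", "bob", "carol"})),
]
events = [DisclosureEvent("doc-2", "mallory")]
print(confidentiality_breach_ratio(inventory, events))  # 0.5
```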

In contrast to other proposed InfoSec measures, such as the ones advanced in NIST SP 800-55 Rev. 1, the suggested set of metrics does not provide an indication of the estimated quality and efficacy of InfoSec protection efforts; rather, it complements those kinds of measures. Instead of measuring the budget devoted to information security, the percentage of high vulnerabilities mitigated within organizationally defined time periods after discovery, the percentage of security personnel who have received security training, or the percentage of systems that have conducted annual contingency plan testing, it gives evidence of the actual effectiveness of InfoSec protection efforts. An important future line of research would be to define an alternative set of metrics, directly connected to the InfoSec principles, that instead of assessing the extent to which each principle has been compromised, indicates how well a particular security control contributes to the preservation of those principles.

6. Conclusion

The growing dependence of organizations on information and IT justifies the existence of updated references that assist organizations in protecting their informational assets. Over time, several authors have argued for an updated group of principles to guide the information security efforts of organizations, both by reviewing the meanings of current principles and by suggesting additional principles that help InfoSec stakeholders keep up with the evolution of business requirements, threats, and technology.

In view of this con­tin­ual need to recon­sider the foun­da­tions that define infor­ma­tion secu­rity, we pro­posed a revised frame­work of infor­ma­tion secu­rity prin­ci­ples struc­tured in five dimen­sions con­tain­ing thir­teen prin­ci­ples and five sub-prin­ci­ples. Each of the com­po­nents of the frame­work was defined and sup­ple­mented with a basic and ini­tial set of met­rics.

We hope that these contributions may prove useful for the management of information security in organizations, assisting their stakeholders in engaging in a dialogue regarding the goals of information protection and the means that best accomplish the attainment of an appropriate information security level.

The proposed framework of information security principles is not final and should be open to debate and revision. It is our expectation that it may prompt the development of an updated catalog of information security threats closely related to or rooted in those principles, as well as the emergence of new insights regarding more mature and sophisticated InfoSec effectiveness metrics.

Acknowledgments

This work is funded by FEDER funds through Programa Operacional Fatores de Competitividade—COMPETE and National funds by FCT—Fundação para a Ciência e Tecnologia under Project FCOMP-01-0124-FEDER-022674.

References