Original source publication: da Costa, F. and F. de Sá-Soares (2017). Authenticity Challenges of Wearable Technologies. In Marrington, A., D. Kerr and J. Gammack (Eds.), Managing Security Issues and the Hidden Dangers of Wearable Technologies, 98–130, Hershey: IGI Global.
The final publication is available here.

Authenticity Challenges of Wearable Technologies

Filipe da Costa and Filipe de Sá-Soares

University of Minho, Portugal

Abstract

In this chapter the security challenges raised by wearable technologies concerning the authenticity of information and subjects are discussed. Following a conceptualization of the capabilities of wearable technology, an authenticity analysis framework for wearable devices is presented. This framework includes graphical classes for classifying authenticity risks in wearable devices, which are expected to improve users' awareness of the risks of using those devices, so that they can moderate their behavior and consider the inclusion of controls aimed at protecting authenticity. Building on the results of applying the framework to a list of wearable devices, a solution based on digital signatures is presented to mitigate the risk to authenticity.

Introduction

For a long time information security management has been based on the CIA triad, the acronym denoting the principles of Confidentiality, Integrity, and Availability. Over time, the sufficiency and appropriateness of these three cornerstone principles of information security have been challenged by several authors. In 1998, Parker complemented them with three new principles, namely Ownership, Authenticity, and Utility [Parker 1998]. The arrival of the new millennium, with the need for organizations to adopt more agile and flat structures, led Dhillon and Backhouse [2000] to argue for the inclusion of four people-related principles, known under the RITE acronym: Responsibility, Integrity, Trust, and Ethicality. More recently, Teixeira and de Sá-Soares [2013] proposed a revised framework composed of thirteen information security principles and five sub-principles.

In a sense, these sets of information security principles convey worldviews concerning the theory and practice of information security. But new technology may alter our worldviews. An illustrative case is the emergence and evolution of wearable technologies and mobile computing devices, offering us true information systems in our pocket, on our wrist, or through our glasses. These technologies are being equipped with ever-stronger information acquisition, storage, processing, display, and communication capabilities. By adopting and using wearable technologies in our daily activities, we are on the verge of a revolution that has the potential to change the way we live, think, feel, and act.

What challenges will this new era bring us? What will be the impact of wearable technologies on our currently accepted information security principles? Will we need to revamp them? Will we be forced to add new principles? Or will we even have to abandon principles once taken as a mainstay?

Wearcams connected to the Internet and sharing images in real time pose new challenges to confidentiality. Wearable GPS (Global Positioning System) devices (as simple as most common cell phones) shrink the frontiers of personal privacy. Losing our smartphone puts us out of sync with the world and makes us unavailable to others. These all exemplify issues that wearable technologies may raise for information security principles. But among the principles, we are particularly interested in the impacts of wearable technologies on authenticity, here defined as “Information is in accordance with a particular reality, and its genuineness and validity are verifiable, or an individual, entity or process is who it claims to be” [Teixeira and de Sá-Soares 2013]. This interest in authenticity stems from the fact that, in a scenario where all people are connected, not directly, but through their devices or wearable technology, it is crucial to develop mechanisms to ensure that the information received is real and that the subjects we interact with are who they claim to be.

Wearable technologies may be conceived as cognitive prostheses that expand our human capabilities. Increased volumes of information; virtual and augmented reality; sensors feeding us real-time news, opinions, restaurant suggestions, and likes from friends; apps a fingertip away: all extend what we know and shape what we do or choose. Radio-Frequency IDentification (RFID) tags now make possible the Internet of Things (IoT). In fact, in an “all connected” society, wearable technologies make possible the Internet of People (IoP). Will we exchange wearable technologies, or are one's own wearable technologies so personal that without them one will feel naked? It will not take much for wearable technologies to become blended with the body, in a morphing process of technology and human tissue (e.g., implantables), giving rise to bionic entities, redefining our identity, and raising many new questions.

How will we assure that the information we receive through our wearable technologies is in accordance with reality? How will we verify the genuineness and validity of that information? Rather, will that even be possible? How can we be sure that an individual, entity, or process that digitally addresses us is who it claims to be? How can we ascertain who we really are? Will we know who we really will be? How do we prove that we are authentic? Will the machines we wear become autonomous and, once capable of self-programming, look at us as we now look at things, as just other devices? Can these devices impersonate us? How much control will we have over our wearable technologies? Will we be aware of providing our information to third parties without any guarantee about which information is really shared? Will we measure up to our wearable devices in terms of intelligence? Will we inherit the bugs of the devices we wear? Will the Nokia slogan “Connecting people” make no sense in the future? Maybe we will know the interface for people, not the persons.

Against this background, this chapter begins by reviewing the concepts of authenticity and wearable technology. Then, it presents the “wearable ecosystem” and its underlying dangers, discussing how current and foreseeable wearable technologies may impact the authenticity of information and subjects. Thereafter, it suggests a classification system to help users become aware of the risks wearable devices pose to authenticity and to assist them in managing such risks. As a proposal for future research, a hierarchical scheme for digital signatures is proposed to mitigate the risks to the authenticity of information and subjects when immersed in the wearable ecosystem.

Background

To better understand the authenticity challenges posed by wearable technology, this section defines and discusses the key concepts involved, namely Authenticity and Wearable Technology.

Authenticity

Explicitly suggested by Parker [1998] as an information security principle, in this chapter the word authenticity is used to express the quality of information that is “in accordance with a particular reality, and its genuineness and validity are verifiable, or that an individual, entity, or process is who it claims to be” [Teixeira and de Sá-Soares 2013, p. 32].

Just as a modern server uses its digital certificate to prove its authenticity, employing it to authenticate its messages, in the Middle Ages kings had personal seals, which were used to seal the important communications of their kingdoms. Seals prevented the message from being read by others without the knowledge of the intended receiver, thus ensuring the message's confidentiality. The recipient could also verify the message's integrity and authenticity. If the seal remained intact, the message received would be the one the sender wrote, i.e., it would not have been subjected to any modification, and so maintained its integrity. In this case, the proof of a message's authenticity rested on the fact that the seal was unique and not transferable.

Even though the concepts of integrity and authenticity resemble each other, they should not be confused, as explained by Parker [1998]: we may assure that a certain message is faithful, yet it may not be authentic, e.g., due to misappropriation of its author's identity; conversely, if a user enters false information into a computer system, authenticity may have been violated, but as long as the information remains unaltered, it has integrity. Thus, it is crucial to rigorously define each security principle, in order to clarify and contrast meanings and to effectively plan their compliance.
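Parker's distinction can be made concrete with a short sketch (illustrative Python, not from the chapter): a checksum confirms that a message is unaltered, yet says nothing about whether its claimed author is genuine.

```python
import hashlib

def checksum(message: bytes) -> str:
    # An integrity control: detects modification, says nothing about origin.
    return hashlib.sha256(message).hexdigest()

# A message with a forged author: integrity holds, authenticity does not.
message = b"Transfer approved. -- Alice"
digest = checksum(message)

# The receiver recomputes the checksum: the message is unaltered...
assert checksum(message) == digest  # integrity preserved

# ...yet nothing here proves the message really came from Alice.
# If Mallory wrote it and attached Alice's name, the checksum still matches:
forged_by_mallory = b"Transfer approved. -- Alice"
assert checksum(forged_by_mallory) == digest  # authenticity NOT established
```

The example shows why integrity controls alone cannot carry the authenticity principle: binding a message to its author requires something the author alone possesses, such as a seal or a signing key.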

Decomposing the adopted definition of authenticity, we can immediately separate this principle into its main components, namely by dividing the definition into three key parts, two of which refer to information and the third to individuals, entities, and processes. Hence:

- information is in accordance with a particular reality;
- the genuineness and validity of information are verifiable;
- an individual, entity, or process is who it claims to be.

To improve the understanding of this concept, it is relevant to identify other information security principles to which authenticity relates. Besides the proximity to integrity, authenticity has a relationship with the security principle of traceability. In the realm of information security, we employ the word traceability to signify that “actions relevant to information security are observable and imputable to their authors” [Teixeira and de Sá-Soares 2013]. If one cannot guarantee the principle of authenticity, one cannot guarantee the principle of traceability. In other words, the violation of authenticity implies the violation of traceability: it is not possible to prove the authorship nor the sequence of a set of performed actions if the information or the subjects involved are not authentic.

A concept that is often confused with authenticity is authentication, in some cases considered, or at least assumed, to be a synonym or indistinguishable from it (cf. [Ben Aissa et al. 2009] and [Liu et al. 2012]). However, we cannot regard them as synonymous, since they belong to entirely different classes in the domain of information security: while authenticity should be regarded as a principle of security, authentication is a security control intended to assure that a claimed feature of an entity is correct [ISO/IEC 2014]. In the realm of this discussion, authentication is a security process that conducts the test of the authenticity of a subject. Another security process related to the proof of authenticity of an entity is identification. While in identification a security control plays an active role, aiming to identify an entity, for example via biometric devices, in authentication a security control plays a more passive role: the entity first claims to have a certain identity, and the security control then aims to confirm that identity, for example through a login procedure (username/password). It is common to characterize authentication as a 1:1 process (check if an entity is who it claims to be, a one-to-one match) and identification as an n:1 process (among a set of n possible entities, find the one whose identity is being claimed).

Importance of Authenticity

We live in a society dominated by the continuous exchange of information: a 24/7 news cycle, posts on the wall of our social network, and many other forms and media. However, given the ease of today's communication, we must take into account the quality of the information that reaches us, especially whether it is authentic. Certainly, not all information that reaches us is reliable: it might have been tampered with, or the source might not be credible. Therefore, we must seek to ensure that information is in accordance with our reality and that it is genuine, so as not to allow a fictional or even a false reality to be projected upon us. Hence, it becomes increasingly important to be equipped with mechanisms that allow us to make this validation and selection. Since cybercrime is a growing phenomenon [Zingerle and Kronman 2013], computer forensics assumes greater prominence. In the realm of these investigations, authenticity plays a crucial role, in that in any dispute, legal or judicial, it is important to confirm that the information stored in our devices, purportedly resulting from our actions, is authentic [Zhao et al. 2012]. In forensic investigations, that information may be used as a basis for profiling [Marrington et al. 2007], allowing the tracking of our actions and the places we visited. If the authenticity of the information source is compromised, we may face problems ordering the events, resulting in severe inconsistencies in the timeline of facts. This issue can be a major problem during the evidence validation process [Marrington et al. 2011].

Indeed, the information collected can only be used as evidence if its authenticity is guaranteed, as well as the authenticity of its source and of the processes involved in its production or transmission; otherwise, one cannot ensure the non-repudiation of certain actions. Without this guarantee, on the one hand, someone, or some entity or process, can impersonate another and thereby incriminate it, in which case the accused would be punished unfairly; on the other hand, a criminal aware of this lack of guarantee can argue for the dismissal of the evidence, in which case the accused escapes unpunished for a crime that may well have been perpetrated.

In health systems, intelligent transport systems, government applications, and other critical areas where the exchange of information is crucial, ensuring the authenticity of information, as well as of its origin, is essential. When remotely monitoring a patient's physical activity, ensuring the correctness and authenticity of the received data is imperative. This applies not only to the fidelity of the monitoring process, but also to fraud prevention mechanisms in case treatments are, for instance, financed by health insurance contracts [Alshurafa et al. 2014]. For intelligent transportation systems, the authenticity of information and of its source is very important, since invalid information may result in serious traffic accidents and human losses [Zhao et al. 2012]. On the electronic government front, as an illustration, South Korea adopted SecureGov, a framework of multiple security mechanisms to prevent illegal use and leakage of information, to prevent illegal modification of information, and to ensure the authenticity of the user and the delivered information [Choi et al. 2014].

Threats to Authenticity

Following the adopted definition of authenticity, the threats to authenticity can be grouped into two main groups: those that endanger the authenticity of information and those that endanger the authenticity of subjects. Although one may find various examples of threats against authenticity, some paradigmatic cases will suffice to illustrate attacks on authenticity.

Despite the constant efforts to ensure the authenticity of information through controls such as the use of encryption, digital signatures, checksums, and transaction confirmation, these mechanisms are also the target of counterfeiting attempts, some of them successful. The success of these attacks can be explained, at least in part, by the encryption flaws made public and by the growth of private computing power. Indeed, given the evolution of technology and its increased dissemination and ease of access, it is becoming less complicated to bring together technological means with considerable capacity at reasonable cost to exploit those flaws [Pearce et al. 2013].

The authenticity of a subject may be put into question in the case of impersonation, i.e., when a subject impersonates another without being noticed. As an example of impersonation we have the 2011 Norway attacks. The attacks of July 22, 2011, in Norway consisted of an explosion in the area of government buildings in the capital, Oslo, and a shooting that occurred a few hours later on the island of Utøya. The shooter, dressed in a police uniform [Hager 2011], joined people on the island on the pretext of clarifying the attacks in Oslo, justifying his presence on the island as a routine check after the bombing [El País 2011]. In this scenario, the information “issued” by the shooter was in line with reality, as there had been an attack on government buildings in Oslo, and the campsite on the island had been organized by the Norwegian Labour Party, so the presence of the police was seen as ordinary. Thus, this information was authentic, and its genuineness and validity could be verified. However, its source was not authentic, since it was not who it claimed to be. People on the island authenticated the subject through something he had with him: the police uniform.

A similar situation may happen in computer systems. For example, in terminals that access different platforms and services via a Single Sign-On (SSO) authentication policy, the threat of user impersonation is also present. Indeed, Mayer et al. [2014] documented five systems that have been the target of successful SSO attacks. In these cases, the authentication process was considered successful, i.e., the subjects were allowed access by the system; however, they were not authentic: although authenticated, they were not really who they claimed to be. Moreover, the subject authenticated by the system may be authentic, but at a later time the subject using the (already authenticated) system may not remain the same; it may change to a different subject, who may not have authorization to use the system. This requires the system to periodically verify the current user's authenticity, for instance, by checking if the user is who it claimed to be by analyzing keystroke dynamics [Monrose and Rubin 2000].
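A periodic keystroke-dynamics check of the kind Monrose and Rubin describe might be sketched as follows; the timing profile, sample values, and threshold are all invented for illustration, and real systems use much richer statistical models than this simple distance test:

```python
# Minimal sketch of continuous authentication via keystroke dynamics,
# assuming a stored per-user profile of inter-key timings (values invented).
def mean_abs_deviation(sample: list[float], profile: list[float]) -> float:
    # Average absolute difference between observed and expected timings.
    return sum(abs(s - p) for s, p in zip(sample, profile)) / len(profile)

def still_same_user(sample: list[float], profile: list[float],
                    threshold: float = 0.05) -> bool:
    """Periodic re-check: does the current typing rhythm match the profile?"""
    return mean_abs_deviation(sample, profile) <= threshold

alice_profile = [0.12, 0.31, 0.08, 0.22]    # seconds between key presses
current_sample = [0.13, 0.29, 0.09, 0.24]   # close to Alice's rhythm
intruder_sample = [0.40, 0.10, 0.35, 0.05]  # very different rhythm

assert still_same_user(current_sample, alice_profile)
assert not still_same_user(intruder_sample, alice_profile)
```

The point of the sketch is the session model: authentication is repeated silently during use, so a subject who takes over an already authenticated session is eventually flagged.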

Wearable Technology

Since the beginning of human civilization, the desire to create utensils and tools has been present in our minds. Humanity aims to increase its (particularly sensory) capacity by developing tools as items of clothing or as accessories. These artefacts, such as watches, glasses, or others embedded in clothes, that expand human capabilities were named external cognitive prostheses by Rodrigo [1988].

Wearable technology and devices have emerged from the intrinsic human wish for the possession and use of such devices and from the evolution of technology, in particular ubiquitous computing, i.e., computer systems that cooperate with each other transparently to the user [Cirilo 2008]. These devices, some very sophisticated in terms of technology, features, and computing power, have different functions employed in different areas, such as health, entertainment, fitness, and production, among many others, and are becoming increasingly common in our daily routines.

Thus, we define a wearable device as an artifact, i.e., something made or given shape by humans, such as a tool [Collins 2015], that can be used as an external prosthesis in order to extend the cognitive and sensory abilities of the person who uses it.

Evolution of Wearable Devices

The first known wearable devices were eyeglasses [Rhodes n.d.], produced in Italy around 1285–1289 [Glasseshistory n.d.]. However, historical evidence points to the first wearable devices involving some technology being the Nuremberg eggs, a type of “clock-watch”: small ornamental spring-driven clocks made to be worn around the neck, produced in Nuremberg from the early 16th century (c. 1510) [Dohrn-van Rossum 1996, p. 122; Fanthorpe and Fanthorpe 2007, p. 25], and the abacus ring created during the Qing Dynasty in China, a silver ring decorated with a functional abacus, a sort of calculator that allowed the performance of some mathematical operations, dated around 1700 [Chinaculture.org 2010; Zolfagharifard 2014]. During the 1980s and 1990s, Casio launched and produced the Casio CMD-30B, a wristwatch with a keyboard, calculator, TV remote control, and the capacity for information storage [CASIO n.d.]. In the 1990s, the Internet and mobile phones began to be part of firms' and ordinary individuals' daily lives. In May 1998, Bluetooth was launched by Ericsson with the participation of Nokia, IBM, and Toshiba [Karlsson and Lugn n.d.]. Nokia subsequently launched mobile phones with sales records over almost the whole world [Stinson 2015].

Haunted by the so-called millennium bug, the year 2000 arrived and with it the era of mobility. Devices that hitherto had no network connection began to come equipped with wireless networking via Bluetooth, WAP, GPRS, or Wi-Fi. All devices could now be networked.

In 2010, Nike launched the Nike+ SportBand, a bracelet that, by using motion sensors, gives important information about training. Since then, there have been many other fitness gadgets featuring GPS functionalities and data synchronization with other devices such as mobile phones. It has become common to share on social networks the paths we have travelled, whether running or riding a bicycle.

Most devices are now “smart”, having the ability to collect information about who we are, where we are, and what we do. Our devices are able to communicate with other devices and have processing capabilities that turn them into pocket computers. Since 2013, the appearance of smart wearable devices has become more evident: eyewear (e.g., Google Glass), lifelogging cameras (e.g., Narrative Clip), and smartwatches (e.g., the Apple Watch) illustrate this evolution.

Today, in the era of the IoT, we have actual computers that fit in the palm of our hand, and whose complexity makes us think about the future. We can easily imagine a smart hand or foot, or a Bluetooth shoelace that tightens itself. From the simple analogue watch and glasses, through the Casio CMD-30B, perhaps the most famous of all, to smartwatches, the evolution of such devices has been extraordinary, as has the technology that supports them, such as nanotechnology, wireless communication systems, and new textile fibers, among many others.

There are many practical applications of today's wearable technology, in numerous areas, and in many cases building on earlier devices and technology. For example, RFID tags have seen their applicability increase with the advent of these new devices. Another example is FedEx, which has equipped many of its delivery vehicles with hands-free ignition and a vehicle security control system activated by an RFID bracelet worn on the driver's arm [Schell n.d.]. Many sensors are now used in clothing that enable us to remotely monitor the vital signs of outpatients [Hurford 2009].

Wearable devices are also widely used in medicine, in different scenarios. In cases of Lennox-Gastaut syndrome, a severe childhood encephalopathy, Vagus Nerve Stimulation (VNS) is a less invasive treatment to control epileptic seizures [Jobst 2010]. Wearable devices, such as bracelets, watches, or glasses, can also contribute to better social interaction for those who have limitations in communicating or socializing, as in the case of autistic individuals [Kirkham and Greenhalgh 2015].

Given the usefulness and potential of wearable devices, in particular smartwatches, NASA challenged developers to design an app for astronauts to use when they are in space [Geier 2005]. We cannot know what the future holds, but certainly this stimulating and exciting evolution will continue.

Icons

There are different types of wearable devices that by their functionality and usage stand out from the rest. Below, some of those devices, considered here as icons, are presented.

The brief review of these icons suggestively illustrates the range of areas where such devices can operate and the versatility of their applications.

Types and Categories of Wearable Devices

There are several types of wearable devices. Depending on their working method and the technology employed, they can be grouped into types. Table 1 shows the major types of wearable devices, following the work by Hurford [2009].

Table 1: Wearable Device Types


Based on the Innovation World Cup categories organized by the Wearable Technologies Conference [WTC n.d.], these devices can also be grouped into categories according to their core market sector, as illustrated in Table 2.

Table 2: Wearable Device Categories


Taking into consideration the types and categories of wearable devices shown in Table 1 and Table 2, respectively, the wearable devices previously considered icons may be classified as displayed in Table 3.

Table 3: Icon Wearable Devices Classified by Category and Type


In a classification system based on categories and types, classifying according to category is a simple task; however, it is difficult to fit each device into only one type. For example, the smartwatch “Moto 360” or “Google Glass” can be put into four different types, namely “smart computer”, “data acquisition & sensors”, “location & position”, and “display”. Another example is the “artificial pancreas”, which may assume the types “medical device” and “data acquisition & sensors”. Many wearable devices have a wide range of features, thus limiting some wearable device classification systems and hampering more specific analyses, such as the identification of wearable devices' authenticity risk levels. These limitations led the authors to propose an authenticity analysis framework for wearable devices, which is described in the next section.
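The difficulty of a one-type-per-device taxonomy can be shown with a minimal sketch; the type assignments below simply restate the chapter's examples in code:

```python
# Sketch of why a single-type taxonomy breaks down: the device names are
# the chapter's examples; the type sets restate its discussion.
device_types = {
    "Moto 360":            {"smart computer", "data acquisition & sensors",
                            "location & position", "display"},
    "Google Glass":        {"smart computer", "data acquisition & sensors",
                            "location & position", "display"},
    "artificial pancreas": {"medical device", "data acquisition & sensors"},
}

# A strict one-type-per-device scheme cannot represent these devices:
ambiguous = [d for d, ts in device_types.items() if len(ts) > 1]
assert ambiguous == list(device_types)  # every device spans multiple types
```

Because each device maps to a set of types rather than a single one, any analysis keyed to a single type, such as a per-type authenticity risk level, is forced either to pick one type arbitrarily or to handle overlapping memberships, which motivates the framework proposed next.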

Authenticity Analysis Framework for Wearable Devices

In order to classify wearable devices in terms of authenticity, it is useful to first define and understand their environment, herein termed the Wearable Ecosystem.

The Wearable Ecosystem

Communication is certainly a vital function for all living beings, humankind being no exception. Over the years a lot has changed, moving from simple communication between people, Human-to-Human (H2H), to other forms of communication, especially at a distance, such as mail, telephone, or the Internet.

There are numerous forms and architectures for human communication systems, from mail to email. With the increasing use of mobile phones, the Short Message Service (SMS) and the Multimedia Messaging Service (MMS) have become increasingly important data transmission mechanisms.

With the evolution of technology, as well as of wearable computing capabilities and networks, including Wi-Fi, it is possible to have these devices networked, acting as network nodes and performing the functions of clients or servers (peer-to-peer).

Given the nature of wearable technology, these devices, isolated or connected in an M2M (Machine-to-Machine) scheme, can be considered, in terms of communication, as the external frontiers between the persons who use them and their environment, thereby defining an interface between the wearer and the outside.

Acting as their users' interfaces, these devices can define two types of communication. In an interaction between two persons, subject A equipped with device Ia (interface a) and subject B equipped with device Ib (interface b) can establish the following kinds of communications, in a notation à la Unified Modeling Language (UML) and with Cn indicating the time sequence of communication:


Figure 1: Interface-to-Human Communication Scenarios


Figure 2: Interface-to-Interface Communication Scenario

In both scenarios, I2H and I2I, there is no direct contact between subjects: all communications are mediated. In the I2H scenario this mediation is partial, since subject B is not using an interface of his own and communicates directly with A's interface (the connection represented by C3 in Figure 1b), so for B there is no notion of a border. In the I2I scenario, the communications from A to B and from B to A are completely mediated, i.e., neither subject sends a message directly beyond his border. Each one communicates with the outside using his interfaces.

As members of this Wearable Ecosystem, people can “wear” network interfaces, making them nodes of a new network where people and things are all connected. The intra- and interconnections between people, people as wearable device users, people's devices, and third parties' devices form a new mesh, expanding their communication networks and defining a new paradigm in personal and interpersonal communications. This new giant mesh may be designated by the emerging term “Internet of People” (IoP), as illustrated in Figure 3.


Figure 3: Internet of People

Based on the definition of a natural ecosystem, a system formed by the interaction of a community of organisms with its environment [Ecosystem n.d.], a Wearable Ecosystem can be defined as a system formed by organisms: humans, as users, and their wearable devices, as artefacts. In this ecosystem all interactions are mediated by artefacts.

This new ecosystem presents new challenges to civilization as a whole. Taking into account the development of machine automation over the last three centuries, Davenport and Kirby [2015] organized that evolution into three eras:

- Era One, in which machines relieved humans of physically dangerous and demanding work;
- Era Two, in which machines took over routine and clerical tasks;
- Era Three, in which machines take on decisions, encroaching on knowledge work itself.

In Era Three of machine automation, wearable devices are in a privileged position compared to any other machine, given their proximity to their users. These devices can collect users' behaviors, reactions, and routines more accurately than any other machine, because they do it in a transparent and non-invasive way, that is, without conditioning the user [Xu et al. 2008]. This proximity, combined with their ability to collect and process data, makes these devices able to feed and teach their smart systems, including decision-making systems.

With all the information these devices can collect and the advances in artificial intelligence (AI), we may at some point have wearable interfaces that, assuming the role of a person's external interface, can easily pretend to be that person and be able to pass the Turing test. The Turing test [Turing 1950] assesses the ability of a machine to demonstrate intelligence equivalent to a human's, or to be indistinguishable from a human. With the evolution of wearable interfaces, especially in terms of their computational capability and artificial intelligence capacity, they may imitate their users, effectively “passing as the user” when acting as his or her interface to another subject or interface.

Should the Turing test be changed? So far this test uses two types of players: the human and the machine. In terms of the test, we have a duel between the human mind and the machine. With the presence of wearable intelligent interfaces, we now have three types of possible players:

In the hybrid human-machine scenario, where the machine represents the set of wearable interfaces holding communication capabilities controlled by AI algorithms, it may not be possible to distinguish what belongs to the human and what belongs to his or her devices. Will intelligent wearable interfaces be able to pass the Turing test and so impersonate a human? If so, we may consider a new kind of impersonation, one that puts the guarantee of authenticity at risk: it is not an individual who is impersonating another, but one of the individual's own artefacts that is impersonating him or her.

Nowadays, it is often hard to ensure authenticity despite the existing controls. Even in peer-to-peer connections, attacks such as “man-in-the-middle” (MITM) are sometimes successful, despite existing mechanisms such as digital signatures and public/private keys.

The privileged position of wearable devices in relation to their users makes it easy for them to intrude in the communications they mediate, pursuing attacks such as MITM, particularly in the communications identified as C2 and C3 in Figures 4 and 5. This privileged position, and the access to their users' personal information—allowing them to build a profile of their users and thus behave similarly—can also allow them to pass themselves off as one or more of the real actors in a communication (i.e., impersonation). In this scenario, it makes sense to also consider a new type of attack, namely interface-as-a-person (IaaP).

In the face of impersonation by a wearable device (such as the attacks marked Ia and Ib in Figures 4 and 5), manipulation of reality may take place; that is, information coming from the outside, or sent to the outside, may be manipulated in a way that induces its receiver into a “virtualized” and perhaps fallacious reality. All these new ways of interaction pose severe challenges to the authenticity of information and subjects.

Figure 4

Figure 4: Potential Attacks (identified by a bolt symbol) under I2H Communications

Figure 5

Figure 5: Potential Attacks (identified by a bolt symbol) under I2I Communications

The Framework for Analysis

In order to determine the authenticity risk of a wearable device, taking into consideration its capabilities and features, the authors propose a classification framework based on a scale of 1 (2^0) to 4095 (2^12 − 1) points, in which 1 is the lowest authenticity risk level of a device and 4095 the highest possible value for that risk.

The rating system (cf. Table 4) is based on a 12-bit binary value (1 + 11 bits), in which each bit indicates whether a feature is present in the device under study. The classification relies on a table of the most important device features, sorted from left to right in descending order of the risk they pose to the authenticity principle (11 bits). Hence, the features most problematic for authenticity, for example computational capabilities, occupy the most significant bits (to the left). The features conveying less risk, such as simply reading data, are located in the lower bits (to the right). The 12th bit (n = 11), the most significant, is a special bit that indicates whether the device is critical in the context in which it operates.

Table 4: Rating System Used in the Framework

Table 4

The authenticity risk level—R_authenticity—for a specific wearable device can be determined by computing Equation 1:

Equation 1: R_authenticity = Σ_{n=0}^{11} b_n · 2^n

where b_n represents the value (zero or one) of the bit at position n (0–11).

The use of binary representation and the above formula ensures that two devices have the same classification if and only if they have the same features, so the comparison between devices is direct.
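Equation 1 can be sketched in a few lines of Python. The function follows the text directly; the example bit assignments below are hypothetical, since the full feature table (Table 4) is not reproduced here.

```python
def authenticity_risk(feature_bits):
    """Equation 1: R_authenticity = sum of b_n * 2**n for n in 0..11.

    feature_bits maps a bit position n (0-11) to 0 or 1.  Bit 11 flags a
    critical device; bits 10..0 rank the device's features by descending
    authenticity risk (computational capability high, simple reads low).
    """
    return sum(b << n for n, b in feature_bits.items())

# Illustrative ratings (bit assignments are hypothetical, not Table 4's):
sensor = {0: 1}                      # read-only temperature sensor
full   = {n: 1 for n in range(12)}   # critical device with every feature

print(authenticity_risk(sensor))     # lowest possible risk: 1
print(authenticity_risk(full))       # highest possible risk: 4095
```

Because each feature contributes a distinct power of two, two devices share a rating exactly when they share a feature set, which is what makes the comparison between devices direct.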

As an example, Figure 6 shows the classification of two non-critical devices (b_11 = 0) using the suggested rating system. The first is a temperature sensor and the second is a smartwatch.

Figure 6

Figure 6: Non-critical Wearable Device Classification Examples

Of the two non-critical devices used for illustration, the one with the lower risk level is the sensor, because the only information it can collect is the temperature and it plays no critical role. The smartwatch, in spite of not playing a critical role, has a higher risk level because it has access to a large set of its user's information, which it can manipulate and disseminate without the intervention or even the knowledge of its user.

Simplifying the classification system and its scale, the risk can be translated into the 12 classes, in a human-friendly format, depicted in Figure 7. To promote an intuitive understanding of the class range, code levels from D+ to A− are employed (colors can also be used to flag classes: red for the higher levels of risk and green for the lower, with orange and yellow in the intermediate classes). The most significant non-zero bit of a device's rating defines the device's risk class.
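The class lookup described above can be sketched as follows. The exact bit-to-class mapping (bit 11 → D+ down to bit 0 → A−) is an assumption inferred from the worked examples in the text, not stated explicitly, so treat it as illustrative.

```python
# 12 risk classes, highest risk first (D+ ... A-), as in Figure 7.
CLASSES = ["D+", "D", "D-", "C+", "C", "C-", "B+", "B", "B-", "A+", "A", "A-"]

def risk_class(rating):
    """Map a rating in 1..4095 to its class via the most significant set bit.

    Assumed mapping: bit 11 (critical) -> D+, bit 10 -> D, ..., bit 0 -> A-.
    """
    if not 1 <= rating <= 4095:
        raise ValueError("rating must be in 1..4095")
    msb = rating.bit_length() - 1   # position of the highest set bit
    return CLASSES[11 - msb]

print(risk_class(1))      # read-only sensor -> "A-"
print(risk_class(2048))   # critical device  -> "D+"
```

Under this mapping, a non-critical device whose highest-risk feature is computational capability (bit 10) lands in class D, matching the smartwatch example discussed below.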

Figure 7

Figure 7: Authenticity Risk Classification Levels

The main classification classes (A to D) have the following heuristic meanings:

Icon Devices Analyses

Based on the simplified classification system, Figure 8 shows the authenticity risk level and class for the wearable devices previously presented as “icons”.

Figure 8

Figure 8: Icons Classification

Of the icon wearable devices presented, the simplest is the welding mask (Chameleon 4V+), given that, in terms of technology, it has only light sensors and a display that controls the brightness reaching the user. However, it plays a critical role given the protection it provides to its users; therefore, assuring that the components of this device (hardware) are authentic is very important, since otherwise its operation may be compromised. As a result, its class is D+. Also rated D+, even though its risk value is higher than the mask's, is the artificial pancreas, given its vital role as well as the technology it employs. In this case, the risk is related not only to its hardware but also to its firmware and software.

The smartwatch (Moto 360), the smart glasses (Google Glass) and the camera (Narrative Clip) are the most popular of the list, and possibly the most used, due to their features and similar manner of use. They have an operating system and can be programmed through the installation of applications. In this classification system they have class D. In spite of not playing a critical role, they have access to a large set of their users' information, which they can manipulate and disseminate without the intervention or even the knowledge of their users.

The device with the lowest risk level is the fitness bracelet (Mi Band) because, despite being close to its user 24 hours per day, the information it can collect is very limited (e.g., it is confined to the movements and the cardiac activity of the wearer). Information diffusion is also limited, since the device only communicates with other devices (not wearable) through a provided application. Even so, the risk can be considerable if the application that allows synchronization and information collection does not comply with an appropriate level of security.

Preventive Factor

Based on the ratings of the devices analyzed, we observe that, regardless of the context (critical or not) in which they operate, most wearable devices fall in class D, i.e., the highest level of risk for authenticity. If we add to this high potential risk their privileged position in relation to their users, the information they have access to, and their computational and communication capabilities, we conclude that these types of devices may pose a real threat to authenticity. Therefore, preventive measures should be defined in order to mitigate their potential risk.

Consider that a preventive measure targets a particular feature of the device (corresponding to index n) and reduces the device's potential risk by a factor defined by the coefficient P_n. Its impact can then be included in the authenticity risk formula shown previously in Equation 1, as given by Equation 2:

Equation 2: R_authenticity = Σ_{n=0}^{11} (b_n · 2^n) / P_n

where b_n represents the bit value (zero or one) at position n (0–11) and P_n the value of the preventive coefficient corresponding to n. The preventive coefficient is a heuristic value and can take values from ℕ, i.e., the natural numbers (1, 2, 3, …). A value of 1 for the P_n coefficient means that no preventive action was applied.

As an example, consider a critical sensor: if we use two sensors in parallel (taking the mean as the final value), the estimated preventive coefficient for the corresponding index (n = 11) could be 2. For the read feature (n = 0), if we use sensors with different serial numbers, the probability of both sensors failing may be reduced; in this case the estimated preventive coefficient could also be 2.
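Equation 2 and the critical-sensor example can be put together in a minimal sketch, in which the only assumption is that the device has just the critical flag (n = 11) and the read feature (n = 0) set:

```python
def mitigated_risk(feature_bits, coefficients=None):
    """Equation 2: R_authenticity = sum of (b_n * 2**n) / P_n.

    coefficients maps a bit position n to its preventive coefficient
    P_n >= 1; a missing entry defaults to P_n = 1 (no measure applied).
    """
    coefficients = coefficients or {}
    return sum((b << n) / coefficients.get(n, 1)
               for n, b in feature_bits.items())

# Critical sensor with only the read feature (bits 11 and 0 set).
sensor = {11: 1, 0: 1}
print(mitigated_risk(sensor))                 # no measures applied: 2049.0
print(mitigated_risk(sensor, {11: 2, 0: 2}))  # duplicated sensors:  1024.5
```

Dividing each term rather than the total lets a measure aimed at one feature reduce only that feature's contribution to the overall risk.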

Certainly, it may not be feasible to remove features from these kinds of devices or to completely eliminate the risk for authenticity. However, it may be possible to mitigate this risk. The preventive factor arises in this framework as a mechanism to represent the risk reduction for authenticity associated with each feature of the device (based on its respective preventive coefficient) and, concomitantly, to reflect this risk mitigation in the device's authenticity risk value. This way, if a wearable device was designed and developed to promote authenticity, the rating system of the proposed framework will take that into account, distinguishing the risk level of the device from that of other devices with similar features but no preventive mechanisms.

Solutions and Recommendations

Based on the results of the classification system and its application, one recommendation and a possible solution are proposed to mitigate the risk for authenticity on wearable devices.

The classification system may be used as a mechanism to inform end users about the risks present when using a specific device. By including the respective class on the device's packaging, this information can serve as a criterion for choosing between devices.

A possible control to mitigate the risk for authenticity on these devices is Hierarchical Digital Signatures.

Digital signatures can provide assurance and evidence of the identity and provenance of the author, i.e., the signatory [Saxena and Chaudhari 2012]. However, given the possible types of communication that may occur in the presence of increasingly sophisticated wearable devices, as previously discussed and outlined in Figures 4 and 5, the traditional digital signature, as we know it, may not be sufficient.

These devices can impersonate their users by sending messages themselves or by manipulating users' messages. It is argued that, to ensure the authenticity of the authorship of each exchanged message, messages must be digitally signed with the signature of the authentic author. A message sent by a device (even at the behest of its user) must be signed with the device's signature, and never with the user's signature.

Accordingly, a device should not sign on behalf of its user (or owner), in order to prevent impersonation attacks. A signature produced by a device must guarantee the identification of both the device as author and its user. In this respect, what is proposed is a new hierarchical scheme for digital signatures. The hierarchy of digital signatures may be represented in a diagram of signatures and their extensions (cf. Figure 9), in a manner similar to a class diagram from the object-oriented programming paradigm.

Figure 9

Figure 9: Proposed Signatures Hierarchy

This new scheme allows the receiver of a message to identify the author through his or her signature and, if the communication was mediated by any interface, to identify which interface was used and who owns it. The human author cannot repudiate the authorship of a particular message if it was initially signed with his or her signature.

To illustrate how this mechanism works, recall the communication scenario of Figure 4. In the scenario presented in Figure 4a, subject A starts a communication to subject B; this communication will be mediated by Ia. The message is first signed by subject A and transmitted to A's interface. This interface then signs the message with its own digital signature (this signature is actually an extension of the signature of its user). Subject B receives the message and can verify the authorship of the source (in this case Ia), as well as the fact that the communication was intermediated by an interface, since there is information about a user (subject A). These two levels of message signature prevent impersonation, i.e., the IaaP attack, and ensure the authorship of the original author (subject A) and of the interface that mediates the communication (Ia). In the scenario illustrated in Figure 4b, subject B, in response to subject A (communication C3), uses its own signature to sign the message and sends it to subject A; however, this communication will be mediated by A's interface (Ia). Interface Ia receives and displays the message to the subject who is using it. Subject A can then validate the authorship of the message and verify that there are no intermediaries (subject B has no interface), ensuring in this way the authenticity of the message and its authorship.
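The two-level signing of the Figure 4a scenario can be sketched as follows. HMAC over shared keys stands in for the public-key signatures the scheme assumes (a real deployment would use certificate-based signatures such as RSA or ECDSA, with the interface's certificate issued as an extension of its user's); the actor names and keys are hypothetical.

```python
import hmac
import hashlib

def sign(key: bytes, data: bytes) -> bytes:
    """Stand-in for a real digital signature (HMAC-SHA256 for brevity)."""
    return hmac.new(key, data, hashlib.sha256).digest()

def user_then_interface_sign(message, user_key, interface_key):
    """Subject A signs the message; A's interface Ia then signs the
    (message, user-signature) pair, extending A's signature (Figure 4a)."""
    user_sig = sign(user_key, message)
    interface_sig = sign(interface_key, message + user_sig)
    return {"message": message, "user_sig": user_sig,
            "interface_sig": interface_sig}

def verify(envelope, user_key, interface_key):
    """Receiver B checks both levels: the outer signature identifies the
    mediating interface, the inner one the human author."""
    ok_user = hmac.compare_digest(
        envelope["user_sig"], sign(user_key, envelope["message"]))
    ok_iface = hmac.compare_digest(
        envelope["interface_sig"],
        sign(interface_key, envelope["message"] + envelope["user_sig"]))
    return ok_user and ok_iface

envelope = user_then_interface_sign(b"hello B", b"key-A", b"key-Ia")
print(verify(envelope, b"key-A", b"key-Ia"))   # True: both levels check out
```

Because the interface signs over the user's signature, a tampered message or an interface pretending to carry a different user's message fails verification at one of the two levels.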

This new mechanism will bring new challenges to manufacturers of this type of device, as it will be necessary to develop a security kernel responsible for this new feature.

One possible solution would be to implement a Reference Monitor (cf. [Rushby 1981]). A reference monitor will be the most critical part of this new security kernel, controlling access to objects and acting as a mediator of transactions with the system. This component must be:

In this case, the reference monitor can be conceived as a digital signatures manager, storing the digital signatures of the device and its owner, and ensuring their authenticity in terms of source and use.
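A toy sketch of such a signatures manager, with hypothetical names throughout: in a real security kernel the requester's identity would be established by the kernel itself rather than passed as a parameter, and the "signatures" below are HMAC stand-ins for real key material.

```python
import hmac
import hashlib

class ReferenceMonitor:
    """Illustrative digital-signatures manager: stores the keys of the
    device and its owner and mediates every signing request, so that
    each principal can only ever obtain its own signature."""

    def __init__(self, keys):
        # e.g. {"owner": b"...", "device": b"..."} -- kept private.
        self._keys = dict(keys)

    def sign(self, requester, data):
        # Always invoked: every signature passes through this one method,
        # and a requester is never handed another principal's signature.
        key = self._keys.get(requester)
        if key is None:
            raise PermissionError(f"unknown principal: {requester!r}")
        return hmac.new(key, data, hashlib.sha256).digest()

rm = ReferenceMonitor({"owner": b"owner-key", "device": b"device-key"})
device_sig = rm.sign("device", b"message")   # device signature, never owner's
```

Keeping the keys inside the monitor means the device's software never touches the owner's signing key, which is the property the chapter's hierarchical scheme relies on.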

Future Research Directions

The focus of this work is on the challenges of wearable technologies from the point of view of authenticity as a security principle. The proposed classification system is applicable to classifying the risk for authenticity of information and subjects. Subsequent research could extend the analysis to other security principles, for instance the integrity or availability of information. The hierarchical architecture for digital signatures could also be evaluated from the point of view of other security principles.

It will also be interesting to involve and consult people, as users of these technologies. A survey could help to understand their level of knowledge of this topic and to identify their awareness of the risks underlying these technologies. Based on their opinions as users or future users, it would be important to validate the utility of the classification system proposed in this chapter, in order to identify possible requirements, changes or new directions to follow, so that it may aid in creating a culture of security. Finally, it would be interesting to conduct work aimed at normalizing the scale for rating the preventive factor, so that its current heuristic nature could be replaced by a systematic one.

Conclusion

The evolution of wearable devices has produced not only new devices with many new features that may be important aids in our activities, but also a number of new challenges for the information security field, such as those concerning the assurance of authenticity.

Given the capabilities integrated in wearable devices, and their wide range of applications, it becomes necessary to take some precautions in their use. The framework presented in this chapter intends to alert and sensitize users of this technology to the potential risks and dangers to which they may be subjected, providing them with a classification tool for evaluating and comparing this type of device. The intention is not to call into question the usefulness of this technology, but to make its potential users aware of the associated authenticity risks and of the effectiveness of preventive measures that may mitigate the corresponding dangers.

Based on usage scenarios of wearable devices, potential attacks have been discussed that call into question the authenticity of the information involved or the authenticity of the sender of a message. In order to promote the authenticity of both the information and the subjects involved in the interactions, a mechanism of hierarchical digital signatures was suggested.

Certainly, the evolution of these devices will not stop here and new challenges will emerge, an expected situation that requires, from now on, the management of wearable technology security issues.

Discussion Points

Questions

References

Key Terms and Definitions

Artifact: Something made or given shape by humans, such as a tool, that can be used as an external prosthesis to extend the cognitive and sensory abilities of the person who uses it.

Impersonate: The ability of a person, process or thing to assume the character or appearance of another, normally for fraudulent purposes.

Interface: A wearable device that acts, in a communication scenario, as the bridge between a person (who is using the device) and external entities.

Interface-as-a-Person (IaaP): An attack in which a wearable device exploits its privileged position in relation to its user—together with its access to the user's personal information, which allows it to build a profile of the user and thus behave similarly—to intrude in the communications it mediates and pass itself off as one or more of the real actors in the communication.

Interface-to-Human (I2H): In a communication between subjects A and B, the communication begins with subject A but the message is issued by his/her interface directly to subject B. When B responds to A, the response will be delivered to and received by A's interface. Subject A will receive B's message directly from A's interface, and not from B.

Interface-to-Interface (I2I): In a communication between subjects A and B, a message from A is sent via A's interface, which communicates with B's interface, and vice versa. The communication can occur in both directions, but each subject only communicates directly with his or her own interface.

Internet of People (IoP): A new network where people and things are all connected. Immersed in the “Wearable Ecosystem”, people can “wear” network interfaces, making them nodes of this new giant mesh.

Wearable Ecosystem: A system formed by organisms—the humans, as owners—and their wearable devices, as artefacts. In this ecosystem all communications are mediated by artefacts—as interfaces—assuming two schemas: I2H and I2I.

Endnotes

1 In this work we chose the word principle over alternatives, such as property or requirement, to refer to the concepts of confidentiality, integrity, availability, etc., in order to convey the sense of a guiding foundation for the efforts to protect information and information-related assets.

2 The information about these devices was retrieved from the respective project/manufacturer official websites.