Original source publication: Alarabiat, A., D. Soares, L. Ferreira and F. de Sá-Soares (2018). Analyzing e-Governance Assessment Initiatives: An Exploratory Study. Proceedings of the 19th International Conference on Digital Government Research (DG.O 2018), Delft, The Netherlands.
The final publication is available here.

Analyzing E-Governance Assessment Initiatives: An Exploratory Study

Ayman Alarabiat (a), Delfina Soares (b), Luis Ferreira (c), and Filipe de Sá-Soares (c)

(a) Operating Unit on Policy-Driven Electronic Governance, United Nations University, and Centro ALGORITMI/University of Minho, Guimarães, Portugal
(b) Operating Unit on Policy-Driven Electronic Governance, United Nations University, Guimarães, Portugal
(c) Centro ALGORITMI, University of Minho, Guimarães, Portugal

Abstract

This paper presents an exploratory study aimed at identifying, exploring, and analyzing current EGOV assessment initiatives. We do so based on data obtained from desktop research and from a worldwide questionnaire directed to the 193 countries on the list used by the Statistics Division of the United Nations Department of Economic and Social Affairs (UNDESA). The study analyses 12 EGOV assessment initiatives: (a) seven international/regional EGOV assessment initiatives performed by the United Nations (UN), the European Union (EU), Waseda-IAC, the Organisation for Economic Co-operation and Development (OECD), the World Bank (WB), the WWW Foundation, and the Open Knowledge Network (OKN); and (b) five country-level EGOV assessment initiatives performed by Norway, Germany, India, Saudi Arabia, and the United Arab Emirates. Further, the study provides general results obtained from a questionnaire with participation from 18 countries: Afghanistan, Angola, Brazil, Cabo Verde, Denmark, Estonia, Finland, Germany, Ghana, Ireland, Latvia, the Netherlands, Norway, Oman, Pakistan, the Philippines, Portugal, and Slovenia. The findings show that there is no shortage of interest in assessing EGOV initiatives. However, the supply side of EGOV initiatives is the dominant perspective being assessed, particularly by regional and international organizations. While individual countries show an increasing interest in assessing the users' perspective (demand side), such attempts still seem to be at an early stage. Additionally, the actual use and impact of various EGOV services and activities are rarely well identified and measured. This study represents a stepping stone for developing instruments for assessing EGOV initiatives in future work. For the current stage, the study presents several general suggestions to be considered during the assessment process.

Keywords: e-Governance; e-Government; Assessment; Evaluation

1. Introduction

Digital government can be understood as the deployment of Information and Communications Technology (ICT) applications and solutions with the aim of transforming and modernizing various government functions, activities, policies, and interactions [Janowski 2015]. At the heart of such transformation, electronic governance (EGOV) has been introduced as the strategic use of ICT to support governance processes in which a government streamlines its operations, improves its administrative efficiency, offers suitable electronic services, and reforms its relationships with various stakeholders to increase its integrity, transparency, and accountability [Coleman 2008; Zambrano 2008].

EGOV is assumed to be one of the key enablers to advance countries' efforts towards implementing the United Nations 2030 Agenda and its 17 Sustainable Development Goals (SDGs) [UNDP 2013; United Nations 2016]. Globally, there is a growing interest in implementing EGOV initiatives as a continuous process towards helping countries develop economic growth, political stability, social coherence and justice, and environmental protection [United Nations 2016; Estevez and Janowski 2013]. EGOV can, in fact, reinforce such efforts through: (i) improving government performance and generating more cost-effective government operations; (ii) increasing the efficiency and effectiveness of electronic public service delivery; (iii) mitigating information asymmetry in society; (iv) strengthening government interaction and relationships with several stakeholders; (v) enhancing citizens' participation in decision-making processes; and (vi) promoting open, transparent, and accountable governments and societies [Coleman 2008; Palvia and Sharma 2007; UNDP 2013].

Despite these high expectations associated with EGOV, there is still a debate regarding the true success and impacts of EGOV initiatives, as the evidence of the existing impacts is somewhat mixed and limited [Goel et al. 2012; Suri and Sushil 2017; United Nations 2016]. It could be argued that governments face the challenge of establishing assessment and monitoring initiatives that can effectively measure progress in the different aspects of EGOV [Goel et al. 2012; Gupta et al. 2017]. Accordingly, a realistic assessment of EGOV initiatives seems quite important.

A realistic EGOV assessment is not, however, a simple and linear task. What is to be measured and assessed? How to measure and assess? When to measure and assess? These are fundamental questions that challenge those involved in EGOV measurement, assessment, and monitoring initiatives. Even more challenging is the creation of assessment instruments that could be applied in different countries, thus allowing for a cross-country comparison of EGOV development. The existence of a set of standard instruments for assessing different aspects of EGOV, applicable in all countries, could be extremely valuable for bolstering EGOV initiatives around the world. To help in this process of defining EGOV assessment instruments and conducting EGOV assessments, one fundamental initial step is to have a comprehensive view of the current status of EGOV assessment or, in other words, to know which EGOV assessment initiatives are already being conducted and how they are conducted, whether at the international, regional, or national level.

Therefore, the main purpose of this study is to identify and explore existing EGOV assessment initiatives. Our approach is thus driven by the following research question: How is EGOV assessment being conducted? The study aims to tackle this research question by identifying and analyzing EGOV assessment initiatives performed in five individual countries and seven regional/international EGOV assessment initiatives performed by different organizations, all found through Internet searches. In addition, the study also provides the results of a worldwide survey concerning EGOV assessment initiatives that was performed to complement the Internet search.

The paper is structured as follows: Section 2 presents the scope and rationale of the study; Section 3 describes how the study was conducted; Section 4 examines seven international and regional EGOV evaluation initiatives; Section 5 explores how EGOV assessment is perceived and conducted at the country level by presenting five national EGOV assessment initiatives and summarizing the general findings of a worldwide survey on National EGOV Assessment Initiatives directed to 193 countries; Section 6 discusses several general suggestions distilled from the international, regional, and local analysis conducted in the previous sections; finally, Section 7 concludes the paper, underlines some study limitations, and advances future research steps.

2. Scope and Rationale of the Study

Measurement, assessment, and monitoring seem to be fundamental activities in any context of activity. Rationally, "if you cannot measure it, you cannot improve it". In fact, EGOV assessment initiatives can help and allow governments to measure the progress and achievement of EGOV initiatives, identify strengths to be supported, identify problems or shortcomings so as to propose appropriate courses of action, and chart prospective directions and priorities [Backus 2002; Palvia and Sharma 2007; Potnis 2010; UNDP 2013].

Reasonably, following up on EGOV progress requires systematic assessment processes, including clear frameworks, procedures, and specific indicators. The lack of consistent and holistic frameworks with clear indicators and measures for assessing EGOV initiatives obstructs agreement on how best to measure and assess EGOV initiatives [Cox 2014; Janowski 2015; Sakowicz 2003], which in the end hinders distilling lessons learned, integrating course corrections, and sharing best practices for EGOV development between countries. Furthermore, the importance of establishing coherent EGOV assessment lies in the assumption that different measurement tools may provide inaccurate assessments. Different measurement tools neither allow international comparisons to be made nor permit comparability over time within countries. Consequently, there is an opportunity for those interested in EGOV initiatives to advance global efforts and to contribute significantly towards formulating EGOV assessment frameworks and instruments.

Also rationally, "you cannot measure what you cannot define". Therefore, a prerequisite for exploring EGOV initiatives is to define the meaning of EGOV.

The concept of EGOV has been in circulation for more than a decade. In the literature, there are many definitions of EGOV [Bannister and Connolly 2012]. The majority of these definitions are compatible rather than contradictory, and exhibit common themes and shared threads. The study adopts Estevez and Janowski's [2013] definition of EGOV as "the application of technology by government to transform itself and its interactions with customers, in order to create impact on the society".

It should be noted that there is still a lack of clarity between the concepts of EGOV and electronic government (e-Government). Both concepts are often used interchangeably or synonymously [Bannister and Connolly 2012; Obi 2007]. While a detailed discussion of the differences between the two concepts is beyond the scope of this paper, several publications have pointed out such differences [Bannister and Connolly 2012; Estevez and Janowski 2013; Grönlund and Horan 2005; Janowski 2015; Kolsaker and Lee-Kelley 2007; Obi 2007; Sakowicz 2003]. Based on those publications, it could be concluded that while e-Government concerns the application of ICT to administrative government functions and public service delivery, EGOV involves these aspects plus the promotion of a more participatory and information-sharing society, by encouraging citizen participation in decision-making processes and making government more open, accountable, and transparent.

Accordingly, this study considers EGOV as the application of ICT by government for: (i) facilitating internal government operations and administrative reform; (ii) improving the quality and delivery of government electronic public services that respond to the needs of citizens and businesses; (iii) strengthening the relationship of a government with its different stakeholders (citizens, civil society, non-government organizations, and the business sector); (iv) increasing citizen participation in decision-making processes; (v) reinforcing the goals of making government more open, accountable, and transparent; and (vi) supporting the reach of a "shared or participatory society". This EGOV perspective clearly covers several initiatives implemented by governments, namely e-Government, e-Participation, e-Democracy, Open Government, and information society orientation initiatives.

3. Study Design

This study is exploratory in nature. Exploratory research is often used to clarify and define the nature of the research problem at hand, and it often leads to future studies, either by setting the grounds for further investigation or by prompting the formulation of new research questions [Saunders et al. 2011; Zikmund et al. 2013]. Exploratory research frequently involves qualitative methods, which comprise one or more data collection methods, such as document analysis and review of published reports and information from organizations' databases (secondary sources), and/or interviews, questionnaires, and direct observation (primary sources) [Saunders et al. 2011; Zikmund et al. 2013]. For this study, secondary and primary data were collected through desktop research and through a survey using a questionnaire.

Secondary data were collected through desktop research of available EGOV assessment observatories or initiatives, found through a process of Internet searching. The initiatives were identified through searches conducted on Google's search engine by combining the keywords "E-Governance" and "E-Government" with "Assessment", "Evaluation", "Framework", and "Monitor". The aim of this search was to identify the overall set of existing EGOV assessment observatories and initiatives, at the international, regional, or national level, and independently of (i) the object of assessment considered, (ii) the kind of assessment produced (be it a ranking, a benchmarking, a characterization, etc.), and (iii) the kind of institution conducting the assessment (i.e., governmental, international, research, academic, or private institution).
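The keyword combinations described above can be enumerated programmatically; the following minimal sketch reproduces the eight pairwise queries. The quoted-phrase query syntax is an assumption, since the exact query form used by the authors is not stated.

```python
from itertools import product

# The two domain keywords and the four assessment-related keywords
# from the desktop search; combining them pairwise yields eight queries.
domain_terms = ["E-Governance", "E-Government"]
assessment_terms = ["Assessment", "Evaluation", "Framework", "Monitor"]

queries = [f'"{d}" "{a}"' for d, a in product(domain_terms, assessment_terms)]

for q in queries:
    print(q)  # 2 x 4 = 8 query strings in total
```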

To ensure that a comprehensive set of initiatives and observatories would be found and analysed in this study, the desktop search effort was complemented with a worldwide online questionnaire delivered to government officials responsible for, or at least familiar with, the development and/or assessment of EGOV initiatives (primary data), in the 193 countries that are part of the list of countries used by the Statistics Division of UNDESA (https://publicadministration.un.org/egovkb/Resources/Country-URLs).

This questionnaire served two main purposes. On the one hand, it aimed to identify national EGOV assessment initiatives that governments are conducting at the country level that may not have been discovered during the desktop search. On the other hand, the questionnaire would also allow gathering additional information about: (i) what EGOV aspects countries are assessing; (ii) which entity or entities are leading EGOV assessment initiatives in each country; and (iii) which EGOV aspects respondents consider should be measured/assessed.

The questionnaire, very short and consisting mostly of open-ended questions, was implemented using the LimeSurvey platform. The original version of the questionnaire was written in English. Given the international scope of the survey, the questionnaire was translated into the remaining official languages of the UN, namely Arabic, Chinese, French, Russian, and Spanish. The data collection took place between 18 October 2017 and 20 November 2017.

Prospective survey respondents were identified by browsing the list of official government portals worldwide used by UNDESA, as well as the personal contact lists of the research team. As this was an exploratory study, some countries received more than one invitation, depending on the available contacts. Accordingly, 273 invitations to participate in the survey were sent out. A total of 33 responses were received. Of those, 12 responses were discarded because they were incomplete. For three countries (Angola, Brazil, and the Netherlands) two responses each were received and were merged. Consequently, 18 usable responses were accepted for data analysis. Sections 4 and 5 present the findings from the desktop research and from the survey.

4. International and Regional EGOV Assessment Initiatives

This section discusses the findings related to some of the most well-known regional and international EGOV evaluation initiatives that were found through the desktop research, namely the UN E-Government Survey and Knowledge Database, the EU E-Government Benchmark, the Waseda-IAC International Digital Government Ranking, the OECD Digital Government Transformation, the World Bank Open Data Readiness Assessment, the WWW Foundation Open Data Barometer, and the OKN Global Open Data Index.

The discussion focuses on examining these initiatives based on their scope and coverage, the measurement frameworks applied, the assessment tools, the types of data collected, and the approaches and methodologies followed. It also highlights some observed strengths and limitations of these initiatives.

4.1 UN E-Government Survey and Knowledge Database

The UN E-Government Knowledge Database is an interactive online observatory, created and managed by the Division for Public Administration and Development Management (https://publicadministration.un.org/egovkb/enus/#.WleAMryWbIU). This observatory provides free-of-charge access to all editions of the UN E-Government Survey (2003 to 2016).

The UN E-Government Survey aims to support policy makers in shaping and strengthening their e-Government programs. The survey, which is carried out every two years, measures governments' capacity and effectiveness in using ICT to deliver public services, which reflects how a country is using ICT to promote access and inclusion of its people.

The UN report ranks a total of 193 countries based on the E-Government Development Index (EGDI). The EGDI is a composite index based on the weighted average of three normalized indices, as follows: one third is derived from the Telecommunication Infrastructure Index (TII), which is based on data provided by the International Telecommunications Union (ITU); another third is derived from the Human Capital Index (HCI), which is based on data provided by UNESCO; and the last third is derived from the Online Service Index (OSI). The OSI is based on data collected by an independent expert survey questionnaire conducted under the supervision of UNDESA. The survey focuses on a set of features related to online service delivery, open government data, and e-Participation. The main element of the evaluation process is an extensive assessment of countries' official e-Government portals and other related websites of ministries or departments. The websites are assessed by at least three experts (two local researchers and a senior researcher who reviews the assessment done by the local researchers).
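The one-third weighting described above can be sketched in a few lines of code. This is a minimal illustration of the composite formula only: the upstream normalization of raw TII and HCI values into the [0, 1] range is performed by UNDESA and is not reproduced here, and the component scores shown are invented for illustration.

```python
def egdi(osi: float, tii: float, hci: float) -> float:
    """E-Government Development Index: the equally weighted
    (one-third each) average of the three normalized component
    indices described above (OSI, TII, HCI)."""
    for name, value in (("OSI", osi), ("TII", tii), ("HCI", hci)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be normalized to [0, 1]")
    return (osi + tii + hci) / 3

# Invented component scores for a hypothetical country:
print(round(egdi(osi=0.90, tii=0.60, hci=0.85), 4))  # prints 0.7833
```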

This survey focuses on supply-side analysis, as it examines e-Government features available on national portals and related websites, clearly neglecting the demand side. Additionally, the UN e-Government ranking purely measures e-Government portals and the number/types of e-Government services, rather than assessing the actual use of such services by citizens and businesses.

Although the methodological framework has remained consistent across the different editions, its components have been updated to reflect emerging trends in e-Government strategies (e.g., the advancement of mobile service delivery, the use of social media, and open government data for promoting effective, transparent, and accountable government), changes in technology (e.g., the Internet of Things, big data, and cloud computing), evolving knowledge of best practices in e-Government, as well as new indicators for the telecommunications and human capital indices.

4.2 European Union E-Government Benchmark

The European Commission Directorate-General for Communications Networks, Content and Technology regularly conducts a study on the state of play of e-Government services across Europe (https://ec.europa.eu/digital-single-market/). Known as the EU e-Government Benchmark, this study aims to measure the performance of the public sector across European Union member states and other European countries. The study focuses on examining trends, issues, innovative practices, challenges, and opportunities of e-Government development across Europe.

The 14 reports (including e-Government Benchmark background and insight reports) that have been published so far provide up-to-date information on the advancement of e-Government in European countries and suggest further actions to overcome potential gaps. The reports also promote standards and guidelines for the future, ambitious implementation of e-Government services.

The benchmark studies follow a method called "Mystery Shopping". A mystery shopper is trained to act as a prospective user in order to observe and measure a given public service process. To ensure reliability and professionalism, each mystery shopper follows a detailed and objective evaluation checklist.

The assessment follows the "eGovernment Benchmark Framework". This framework has evolved over the years in order to keep up with the priority areas of the European e-Government Action Plans, as well as with technological and organizational developments. The current version of the framework, in use since 2016, was developed to implement the European e-Government Action Plan (2016-2020). For this reason, its dimensions and indicators are structured in line with the main priority areas included in the action plan, which are: (i) modernize public administration with ICT; (ii) use key digital enablers; (iii) enable cross-border mobility with interoperable digital public services; and (iv) facilitate digital interaction between administrations and citizens/businesses for high-quality public services. Progress in these areas is measured via four top-level benchmarks: user-centric government, transparent government, cross-border mobility, and key enablers. The strong alignment between the framework dimensions and the plan areas ensures a more adequate measurement of the progress of e-Government services in such areas.

Similarly to the UN E-Government Survey, the EU study focuses on the supply side of online e-Government services. The study examines the quantity and quality of e-Services from a government perspective, leaving the demand side largely unexplored. The study considers both citizen and business perspectives, since the evaluation of the online services offered by governments tracks services offered to both citizens and businesses.

4.3 WASEDA-IAC International Digital Government Ranking

The Institute of e-Government at Waseda University, Tokyo, in cooperation with the International Academy of CIO (IAC), performed its e-Government ranking for the first time in 2005. The Waseda-IAC international e-Government ranking aims to assess the progress of e-Government development, to identify new trends in e-Government development, and to share best practices among participating countries (http://www.e-gov.waseda.ac.jp/). So far, 65 countries have been included in the ranking and 13 annual ranking reports have been issued.

The Waseda-IAC ranking and evaluation indicators (benchmarking) have continuously evolved and improved to fit the current development and application of ICT in the public sector and to respond to the new challenges of e-Government implementation. A recently introduced change reflects the transformation from e-Government to digital government, a perspective which covers more comprehensive government activities. Accordingly, the Waseda-IAC ranking defines and uses a set of comprehensive parameters comprising 10 indicators: (i) network preparedness/digital infrastructure; (ii) management optimization; (iii) online services/applications; (iv) national portal/homepage; (v) government chief information officer; (vi) digital government promotion; (vii) e-Participation and digital inclusion; (viii) open government data; (ix) cyber security; and (x) the use of emerging ICT.

Besides containing a relatively comprehensive set of indicators for benchmarking, the Waseda-IAC ranking also focuses on new trends in e-Government development, such as the Internet economy, cloud computing, big data, social media, the Internet of Things, and cyber security. Further, the ranking considers the relationship between governments and their stakeholders and highlights the importance of Government Chief Information Officers (GCIOs).

4.4 OECD Digital Government Transformation

The Organisation for Economic Co-operation and Development (OECD), specifically its Directorate for Public Governance and Territorial Development, aims to assist its 35 member states (besides other partner countries) in their efforts toward fostering the digital transformation of the public sector. This digital transformation can be characterized as a shift from e-Government (the government's use of ICT, particularly the Internet, as a tool to achieve better government) to digital government (the use of digital technologies, as an integrated part of governments' modernization strategies, to create public value), in order to realize a fully developed, more open, and more efficient government and citizen engagement through digital government policies.

For this purpose, and particularly after 2014, the OECD has been performing a series of digital government studies, in continuation of its previous series of studies related to e-Government (2003-2013) (http://www.oecd-ilibrary.org/governance/oecd-digital-government-studies_24131962). Such studies promote the exchange of experiences, knowledge, and best practices regarding digital transformation, contributing to social, economic, and environmental development.

The digital government studies are based on analytical frameworks for digital government, for open government data, and for a data-driven public sector, developed by the OECD based on the 2014 OECD Recommendation of the Council on Digital Government Strategies. The OECD Recommendation includes twelve key principles grouped into three main pillars: (i) the openness and engagement pillar, embracing four principles: openness, transparency and inclusiveness; engagement and participation; creation of a data-driven culture in the public sector; and protecting privacy and ensuring security; (ii) the governance and coordination pillar, encompassing four principles: leadership and political commitment; coherent use of digital technology across policy areas; effective organizational and governance frameworks; and strengthened international co-operation with other governments; and (iii) the capacities to support implementation pillar, containing four principles: development of clear business cases; reinforced ICT project management capacities; procurement of digital technologies; and legal and regulatory frameworks.

To measure a given country's progress in implementing the OECD Recommendation, the OECD Directorate for Public Governance and Territorial Development conducts two surveys, on Digital Government Performance and on Open Government Data, across OECD member countries and two partner countries.

The Survey on Digital Government Performance aims to investigate progress in governments' performance on the digitization of the public sector. The survey focuses on eleven digital government transformation areas: ICT strategies; digital rights and obligations; governance; ICT project management; ICT business cases (methods for measuring the value proposition); financial benefits for the central government; financial benefits outside of the public sector; HR strategy to develop ICT skills in government; ICT procurement; online service delivery and transaction costs; and the use of national online portals.

The Open Government Data survey focuses on three main aspects of open government: open government data policies and governance frameworks, open government data implementation, and open government data impact. The survey benchmarks OECD countries and partners using the OECD Open, Useful and Reusable Data Index.

Data for both surveys are gathered through an online questionnaire delivered to government officials, predominantly chief information officers. The results of both surveys are limited to central/federal governments and exclude digital government and open government data practices at the state/local levels.

4.5 Open Government Assessment

Three main assessment initiatives related to open data were found. Each of them is described in the following sections.

4.5.1 World Bank Open Data Readiness Assessment

The Open Data Readiness Assessment (ODRA) tool, developed by the World Bank (WB), is a diagnostic and planning tool aimed at helping governments design and implement open data initiatives (http://opendatatoolkit.worldbank.org/en/odra.html) by providing qualitative data and action-oriented recommendations. ODRA assesses a country's readiness for open data and initiates a consultative dialogue among relevant stakeholders regarding open data.

ODRA is based on experts' evaluation of several dimensions that may have an impact on open data initiatives, namely: senior leadership; policy/legal framework; institutional structures, responsibilities, and capabilities within government; government data management policies and procedures; demand for open data; civic engagement and capabilities for open data; funding an open data program; and skills infrastructure. Within each dimension, the assessment considers a set of indicators or questions. The examiners (experts) are a joint team of WB experts, experts from the country to which the open data initiatives belong, and experts from other national and international donor agencies, such as the United Nations Development Programme.

The ODRA tool is available free of charge to adapt and use. The WB also offers countries worldwide the possibility of requesting its involvement in performing an assessment study. From 2013 to 2016, 13 countries conducted open data readiness assessment studies in collaboration with the WB. The WB website offers the full reports with the results of such studies in different languages, namely English, French, Russian, and Spanish.

4.5.2 Open Data Barometer (ODB)

The Open Data Barometer (ODB), run by the World Wide Web Foundation, assesses governments' efforts and performance in fulfilling the open data principles. These principles mandate that data should be open by default, timely and comprehensive, accessible and usable, and comparable and interoperable, that it should improve governance and citizen engagement, and that it should serve inclusive development and innovation (http://opendatabarometer.org).

To fulfil this purpose, the ODB produces a yearly global measurement of how governments are publishing and using open data for accountability, innovation, and social impact. The Barometer ranks governments based on a framework that consists of three major dimensions, each containing several sub-indices. The first dimension is “readiness of governments for open data initiatives”. The data for this dimension is taken from the World Economic Forum, the WB, the United Nations e-Government Survey, and Freedom House. The second dimension is “implementation of open data by governments”.

The implementation assessment covers the open data quality of 15 kinds of data in each country (mapping data, land ownership data, national statistics, detailed budget data, government spend data, company registration data, legislation data, international trade data, public transport timetable data, health sector performance data, primary and secondary education performance data, crime statistics data, national environmental statistics data, national election results data, and public contracting data). Finally, the third dimension is “impact of open data on citizens' lives” and concerns the impact open data is having on the social, political, and economic spheres. For the implementation and impact dimensions, a peer-reviewed expert survey containing a range of questions about open data contexts, policy, implementation, and impacts is used.
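As a rough illustration of how a dimension-based ranking like the ODB's can be computed, the sketch below combines per-country scores on the three dimensions into a composite and sorts countries by it. The equal weights, the 0-100 scale, and all names are illustrative assumptions; the published ODB methodology defines its own sub-indices and weightings.

```python
from dataclasses import dataclass

@dataclass
class CountryScores:
    """Illustrative per-country scores on the three ODB dimensions (0-100)."""
    name: str
    readiness: float
    implementation: float
    impact: float

def composite_score(c: CountryScores, weights=(1/3, 1/3, 1/3)) -> float:
    """Combine the three dimension scores into one ranking score.

    The equal weights are a placeholder assumption, not the ODB's own weighting.
    """
    w_r, w_m, w_i = weights
    return w_r * c.readiness + w_m * c.implementation + w_i * c.impact

def rank(countries: list) -> list:
    """Return countries sorted from highest to lowest composite score."""
    return sorted(countries, key=composite_score, reverse=True)
```

Any real reimplementation would replace the placeholder weights with the sub-index structure the Barometer actually publishes.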

The 2016 ODB edition assessed and ranked 115 countries. The ODB website provides features to sort all surveyed countries. Reports from all editions are free of charge and available to download in three languages: English, French, and Spanish.

4.5.3 Global Open Data Index (GODI)

The Global Open Data Index (GODI) is the annual global bench­mark for the open­ness of gov­ern­ment data, man­aged by the Open Knowl­edge Net­work (OKN) (https://index.okfn.org/).

The primary idea of this index is to assess how governments around the world are publishing open data. To this end, the index assesses open government data in 15 thematic areas considered relevant for civil society (government budget, national statistics, procurement, national laws, administrative boundaries, draft legislation, air quality, national maps, weather forecast, company register, election results, locations, water quality, government spending, and land ownership). GODI assesses these 15 areas based on several indicators: available online, open-licensed, machine-readable, available in bulk, and available free of charge. The GODI methodology is based on crowdsourced data collection, meaning that anyone can review and submit evaluations, which must be accompanied by qualitative justifications. To validate the crowdsourced contributions, the inputs are reviewed by a team of professionals from OKN. GODI does not look at other aspects of the common open data assessment framework, such as use and impact. Further, no recommendations or improvement actions are suggested.
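The indicator-based scoring described above can be sketched as follows. The five binary indicators come from the text; the equal weighting, the percentage scale, and the function names are illustrative assumptions, not the published GODI scoring rules.

```python
# Hypothetical GODI-style scorer (illustrative only).

INDICATORS = ("available_online", "open_licensed", "machine_readable",
              "available_in_bulk", "free_of_charge")

def dataset_score(assessment: dict) -> float:
    """Percentage of the five openness indicators a dataset satisfies.

    Missing indicators are treated as not satisfied.
    """
    met = sum(1 for ind in INDICATORS if assessment.get(ind, False))
    return 100.0 * met / len(INDICATORS)

def country_openness(datasets: dict) -> float:
    """Average dataset score across the thematic areas assessed for a country."""
    if not datasets:
        return 0.0
    return sum(dataset_score(a) for a in datasets.values()) / len(datasets)
```

In the actual index, each crowdsourced assessment would also carry the qualitative justification mentioned above before OKN reviewers accept it.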

5. National EGOV Assessment Initiatives

The most widely known international EGOV assessment initiatives were described in the previous section. Those initiatives assess and compare EGOV in multiple countries. In this section, the focus is on exploring how EGOV assessment is perceived and conducted internally (at the national level) by countries. The section starts by presenting five EGOV assessment initiatives, for Norway, Germany, India, Saudi Arabia, and the United Arab Emirates, which were identified during the desktop research. General findings of a worldwide survey on national EGOV assessment initiatives are presented in the second part.

5.1 National EGOV Assessment Initiatives Identified through Desktop Research

5.1.1 The Agency for Public Management and eGovernment (Difi), Norway

The Agency for Public Management and eGovernment (Difi) (https://www.difi.no) drives and oversees the implementation of the digital government strategy in Norway. As part of the controlling and monitoring process, Difi tracks the progress of the digital government strategic plan and plays key advisory, audit, and technical roles when Norwegian public sector institutions seek and subcontract consultancy or advisory services from private firms.

Difi distributes statistics and directs annual reviews of public digital services and websites using an evaluation instrument called “Quality Web Project”. This instrument includes 33 well-defined indicators that cover six perspectives of analysis: (i) the website and services are easy to find; (ii) the website and services are credible; (iii) the website and services are safe to use; (iv) the website and services work well; (v) the website and services are easy to use for everyone; and (vi) it is easy to get help. The instrument seems to follow a clear and rigorous process, as the testing indicators for each criterion are precisely set and well explained. The survey usually covers a large number of citizens' experiences with public services (e.g., 11,567 randomly selected residents participated in the 2016 survey).

Further, Difi's assessment initiatives are not only focused on citizens. Difi also leads reports on the status of digitalization in the public sector, on the digitalization and change of administrative processes, on open public data, and on employees' satisfaction.

Difi's evaluation instrument, reports, and statistical information are available to download free of charge, but most of them only in Norwegian. This is understandable, as the evaluation process is designed for and focused on the Norwegian public sector. However, it hinders the exchange and spread of assessment practices between countries on a global scale.

5.1.2 EGovernment MONITOR, Germany

The eGovernment MONITOR initiative has been examining and assessing the current eGovernment situation in Germany since 2010 (http://www.egovernment-monitor.de/startseite.html).

eGovernment MONITOR aims at improving the acceptance and the design of e-Government online services by collecting information on user preferences and user behaviors. Since 2012, Austria and Switzerland have been included in the survey/report as comparison countries. eGovernment MONITOR draws up an annual report appraising the status of eGovernment implementation. The report results from a quantitative survey on the use and acceptance of e-Government services, privacy concerns, citizens' satisfaction, and e-Government drivers and barriers. Data collection typically uses computer-assisted web interviews with around 3,000 citizens across the three countries. The annual reports are free to download, but in German only.

This initiative results from a joint effort by NEGZ (National E-Government Competence Center, www.negz.org) and ISPRAT (Interdisciplinary Studies on Politics, Law, Administration and Technology, http://www.isprat.net). NEGZ and ISPRAT work to advance the modernization of the public sector using ICT in federal, state, and local governments.

The ISPRAT project issued several reports between 2007 and 2017. The project analyzes and characterizes the current situation of national and local EGOV initiatives in Germany and proposes recommendations toward the effective implementation of those initiatives. The project assists and evaluates various aspects of EGOV initiatives, such as: government policies and strategies, open government data, mobile government services and applications, public e-services (use, design, and delivery), e-participation activities, and the exploration of emerging technologies such as cloud computing and Web 2.0 applications (e.g., social media) for public administration.

All NEGZ and ISPRAT reports are freely available online, but in German only. This makes it hard to promote the international exchange of experience in administrative modernization and may prevent the sharing of good practices for EGOV development between countries.

5.1.3 EGOV Assessment Initiatives, India

Under the “Digital India” program, which aims at transforming India into a digitally empowered society and knowledge economy, several initiatives have been taken for the growth of EGOV in the country. The implementation, coordination, and monitoring of EGOV projects/initiatives are mandated to the Ministry of Electronics & Information Technology (MeitY).

As part of its assessment strategy, MeitY has been assessing EGOV projects across India to understand their impact and effectiveness across projects and implementation geographies (http://meity.gov.in/content/assessment-e-governance-projects). For this purpose, an assessment framework has been developed, continuously improved, and customized to each project.

The assessment framework consists of four major dimensions, each comprising several indicators. The dimensions are: cost of service, quality of service, quality of governance, and overall assessment. Based on this framework, two main assessment initiatives were conducted, one in 2007-2008 and the other in 2009-2010, covering several national and state-level EGOV projects. The 2007-2008 initiative comprised the assessment of three national-level projects, using a survey of 7,000-9,000 citizens, and the assessment of three state-level projects across thirteen states. A total sample size of 600-800 citizens per state was used in these state-level project assessments. The state-level assessment was based on a survey using a control and a treatment group (i.e., users who had experienced the manual system and users who had experienced the computerized system). The 2009-2010 initiative comprised two impact assessments, covering four urban local bodies and five states, as well as one baseline assessment in another five states. Currently, five assessment studies on EGOV initiatives are being carried out using a revised version of the assessment framework.
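The control/treatment design mentioned above boils down to a difference-in-means estimate: the impact of computerization is the gap between the average outcome reported by treatment-group users and by control-group users. The sketch below is a minimal illustration; the outcome variable and sample values are hypothetical, not taken from the MeitY studies.

```python
from statistics import mean

def estimated_impact(treatment, control):
    """Difference in mean outcome between users of the computerized system
    (treatment group) and users of the manual system (control group).
    A positive value means the outcome is higher under the computerized system."""
    return mean(treatment) - mean(control)

# Hypothetical survey responses: number of visits needed to complete a service.
manual_visits = [3, 4, 5, 4]        # control group
computerized_visits = [1, 2, 1, 2]  # treatment group

# A negative impact here means fewer visits, i.e., an improvement.
change_in_visits = estimated_impact(computerized_visits, manual_visits)
```

Real impact assessments of this kind would also report variance and sample sizes per state, which this sketch omits.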

MeitY also has institutional linkages with the National Informatics Centre (NIC) for managing and monitoring EGOV projects, by ensuring the sharing of standards and information and the seamless interoperability of data. The MeitY and NIC reports provide a wealth of information on various assessment aspects and on guidelines for EGOV initiatives. The assessment types considered, and the methodology used, give reasonable and adequate consideration to the assessment of EGOV projects both at an early stage of implementation and after a period of execution. Additionally, several NIC reports follow a qualitative approach, rather than relying only on the typical statistics, showing successful cases and discussing good practices.

5.1.4 E-Government Transformation Measurement, Saudi Arabia

Saudi Arabia's e-Government program “YESSER” has launched an evaluation project called “e-Government Transformation Measurement”. The project aims to evaluate the factual status of government entities regarding e-Government transformation (https://www.yesser.gov.sa/EN/Transformation_Indicators/transformation_measurement_mechanism/Pages/about_measurement.aspx). For this purpose, a framework has been developed. This framework comprises four measurement phases of e-services development, namely the Building Phase, the Availability Phase, the Excellence and Enhancement Phase, and the Integration Phase.

The YESSER program has developed a very specific methodology to be executed through the different phases of application. An agency that completes the requirements of a measurement phase moves on to the next phase, and so on. Each phase of the framework considers specific themes and indices as follows: the Building Phase includes themes such as enterprise setup of the e-Government, technology structure, informational environment, and e-services provided; the Availability Phase includes themes such as impact, electronic empowerment, and e-services made available by agencies; the Excellence and Enhancement Phase includes transition pioneering, empowerment, and services quality; and finally, the Integration Phase includes three perspectives: (1) a structure perspective, which comprises elements that enable providing the required e-government services effectively; (2) a human resources perspective, which covers, on the one hand, the government employees (“service providers”) and, on the other hand, citizens or customers; and (3) a management perspective, which focuses on the elements of the “transaction procedures” needed, by including two themes: basic requirements and security requirements.

Since 2007, YESSER has carried out and managed seven measurement initiatives. Those initiatives have measured the transformation applications in more than 160 government agencies from most governmental sectors, e.g., health, education, finance, foreign and tourism affairs, transportation, environment and social affairs, military, and internal security. The measurement results are available on the website in Arabic and English.

5.1.5 Smart Government Indices, United Arab Emirates (UAE)

The UAE's smart government indices project is overseen by the Telecommunications Regulatory Authority and the Prime Minister's Office (https://government.ae/en/information-and-services/g2g-services/measuring-the-kpis). The project aims to strengthen the UAE's public sector by measuring the development of government electronic and mobile services. The measurement process is based on the definition of several indices, which focus on the level of public awareness, usage, and satisfaction regarding electronic/mobile services. The project suggests an online survey tool to be used by government entities. However, this survey tool is available only to government entities (it cannot be viewed or freely downloaded). Additionally, no information or publications (e.g., results or reports) were found regarding the assessment projects.

5.2 General Findings of the Survey on National EGOV Assessment Initiatives

This section presents the main findings from the questionnaire applied to collect data on existing national EGOV assessment initiatives in the 193 UN Member States. The questionnaire focused on identifying (i) the countries that have EGOV assessment initiatives, (ii) the official institution/organization responsible for assessing and monitoring EGOV, (iii) the EGOV aspects that are currently measured/assessed in each country, and (iv) the additional aspects not currently assessed that should be considered in future assessments.

Responses were obtained from 18 countries: Afghanistan, Angola, Brazil, Cabo Verde, Denmark, Estonia, Finland, Germany, Ghana, Ireland, Latvia, the Netherlands, Norway, Oman, Pakistan, the Philippines, Portugal, and Slovenia. The analysis of the responses provided some interesting findings, summarized in the next paragraphs.

Finding 1: Assessing and monitoring EGOV seems to be a priority for many countries. Survey results indicate that most countries have a ministry and/or a single agency/unit responsible for monitoring and evaluating the implementation and progress of EGOV. Further, all but three of the surveyed countries (Cabo Verde, Ghana, and Pakistan) are currently implementing evaluation projects to assess and monitor EGOV initiatives.

Finding 2: Interestingly, EGOV assessment in several countries is no longer the sole responsibility of ICT ministries, but is mandated to ministries with development, management, and administrative modernization functions: for instance, the Ministry of the Presidency and Administrative Modernization in Portugal, the Ministry of Public Administration in Slovenia, the Ministry of Planning, Development and Management in Brazil, the Ministry of Economic Affairs and Communications in Estonia, and the Ministry of Finance in Finland.

Finding 3: Different general approaches to the assessment process could be noticed. In some countries, such as Norway, assessment initiatives are performed by a centralized agency. In other countries (e.g., Denmark, Finland, and Portugal), the assessment is performed by a central agency in cooperation with other state departments. Countries such as Estonia, Slovenia, and Oman mandate each public unit (service provider) to independently perform its own assessment initiatives, still under the supervision of a central agency. There are also countries that perform the assessment initiatives in collaboration with private research companies (e.g., Latvia) or with research institutions (e.g., Germany). Finally, in other countries, such as Afghanistan, EGOV assessment is performed only when collaboration is requested by an international organization (e.g., the World Bank).

Finding 4: While the majority of the countries surveyed provide various types of statistics, reports, and studies online, most of them are published in the countries' official languages; few were found in English, for example. This hampers the sharing of assessment practices and experiences.

Finding 5: Assessment of classic e-Government services dominates the existing EGOV assessment initiatives (e.g., the level of online services provision, accessibility, usability, delivery, quality, adoption, and satisfaction). Emerging efforts to assess other EGOV aspects, such as e-Participation and open government data, are still lagging.

Several respondents highlighted the need to assess other EGOV aspects, such as the awareness of online service applicability (Latvia), user involvement (the Netherlands), levels of usage and maturity of e-services (Finland and the Netherlands), the adoption of mobile services and cybersecurity (Oman), open data initiatives (Oman and Brazil), and indicators of efficiency and effectiveness, e.g., transaction costs (Denmark). Respondents also stressed the need for globally standardized assessments of user satisfaction (Slovenia and Pakistan) and highlighted the impact of EGOV initiatives on society as an issue (Pakistan).

6. Discussion

Assessment of EGOV initiatives constitutes a central issue for governments, since it can point policy makers and practitioners in the right direction. This may explain the increasing interest in assessing EGOV evidenced by countries as well as by international organizations and entities. Based on the analysis and findings presented in the previous sections, this section presents some suggestions that may be considered when planning, designing, and implementing EGOV assessment initiatives.

1) Toward a stronger demand-side perspective

As noted in the analysis, there is great interest in assessing the supply side of EGOV initiatives; most of the initiatives found, namely the international assessments, focus on it. But the demand side of the equation requires further assessment as well. In fact, governments (who launch, develop, and implement EGOV initiatives) and users (i.e., citizens and businesses) often play separate roles, creating a gap between what is designed and provided and what citizens really need.

The regular UN and EU e-Government surveys and benchmarks, for example, are a wealth of information regarding the e-Government services offered by countries worldwide. Without doubt, assessing the supply side is very significant, but it does not, by itself, constitute or guarantee advanced e-Government development and adoption. Users' real awareness, willingness, ability, actual usage, expectations, needs, and preferences should be considered and precisely measured. Hence, further efforts dedicated to assessing the demand side seem warranted.

The lack of a demand-side perspective in the assessment of open government is also clear from the initiatives analyzed. The supply side is the focus of assessment in international efforts such as ODRA, ODB, and GODI, which assess, evaluate, and rank open government data globally. In these studies, the assessment is mainly based on the availability and accessibility of data. While this is important, the fact is that data on their own have no value. The added value of open data comes when people become aware of the data and are able to access and use them. Hence, an assessment instrument is needed that takes into consideration actual data use from users' perspectives (i.e., citizens, the private or business sector, civil society organizations, academics, and researchers).

It is worth mentioning that, while the international benchmarks focus basically on the supply side, there is evidence that, internally, countries tend to consider the demand side in their assessments, as noticed in Norway, India, Germany, and Latvia.

2) Cap­tur­ing EGOV ini­tia­tives impact

As mentioned above, Estevez and Janowski [2013] define EGOV as “the application of technology by government to transform itself and its interactions with customers, in order to create impact on the society”. Certainly, in this view, an EGOV initiative is not an end in itself but rather a means to achieve goals and to cause wide-ranging societal impacts. Thus, the assessment process should focus not only on EGOV adoption and usage but also on the impact it may cause at the social, economic, environmental, and political levels, in terms of making government more participatory, improving the efficiency and effectiveness of government operations, reinforcing economic status, and increasing democracy. Overall, linking EGOV assessment with the realization of the Sustainable Development Goals (SDGs) would be a significant approach. Accordingly, more assessment efforts should focus on evaluating the prospective impact of EGOV usage, either from the demand side, by the targeted stakeholders (i.e., citizens and firms), or from the supply side, by experts, for example.

3) Mul­ti­plic­ity of stake­hold­ers and diver­sity of assess­ment tech­niques

The study suggests that, whether the EGOV assessment process focuses on the supply or the demand side, it has to consider various evaluation methods and allow various stakeholders to contribute to the assessment initiative. On the supply side, including e-Government experts (e.g., academics, researchers) to assess the appropriateness and quality of the EGOV activities offered by a country would give significant insights. Experts probably have more advanced experience that allows them to perform subjective evaluation and suggest further improvements. On the demand side, experts, businesses, and non-government organizations, besides citizens, should be considered when assessing the quality of EGOV. Latvia constitutes a good example at this level, by considering experts' and businesses' perspectives in assessing the quality of e-Government services. Additionally, while surveys and quantitative methods are the most frequent assessment methodologies, qualitative approaches, including, for example, interviews, focus groups, and expert panels, seem important for advancing EGOV assessment and proposing enhancement practices. Adopting qualitative assessment techniques allows governments to receive not only more feedback on the implementation, impact, and overall evaluation of EGOV initiatives, but also a different kind of feedback, one that may be very useful for understanding the reasons behind certain behaviors and results achieved (or not achieved) with EGOV. The OECD's recent reviews of digital government in Norway, for example, complement the quantitative approach with interviews of Norwegian stakeholders from the public sector. In fact, the OECD stresses the importance of combining quantitative and qualitative measures to assess the supply side of EGOV [OECD 2017].

4) Tar­get­ing dis­ad­van­taged and vul­ner­a­ble groups

As EGOV implementation targets all segments of society, EGOV assessment should similarly ensure that all segments of society are offered equal chances to participate in the assessment initiatives, especially women, youth, people with disabilities, people living in poverty, and people who live in rural areas. In fact, this reinforces efforts toward bridging the digital divide and realizing gender equality.

According to the 2013 ITU report on Measuring the Information Society, the digital divide generally reflects the gap in various social groups' ability to access and use technology, typically by gender, socio-economic level, race, and geographic area. Further, the SDGs underlined in the UN 2030 Agenda emphasize the need to realize gender equality and the empowerment of women and girls. According to the UN 2030 Agenda, these groups must have the same opportunities to access and use technology as men and boys. The UN agenda also stresses that people who are vulnerable must be empowered. Vulnerable groups include elderly people, people with disabilities, migrant workers, minority groups, and refugees. Following the “leaving no one behind” principle adopted by the UN for offering suitable e-services, the EGOV initiatives assessment process must consider and cover all segments of society without distinction of any kind as to race, sex, language, religion, location, national or social origin, property, disability, or other status. Thus, EGOV assessment processes focused on the citizens' perspective should consider these vulnerable and disadvantaged groups. It is believed that such an approach will offer insights to reinforce actions toward responding to their needs and empowering them to contribute to policy making and public services.

7. Conclusions, Limitations and Future Works

This study analyzed and reviewed several international and national EGOV assessment initiatives worldwide. The analysis was based on data obtained from desktop research and from a multilingual questionnaire sent to the 193 countries that are part of the list used by the Statistics Division of UNDESA.

The study analysis provided a better sense of the status quo and the progress being made by countries and international organizations in terms of EGOV assessment initiatives, and unveiled some major challenges faced in these processes.

The study analysis confirms the complexity and interdisciplinarity of the assessment of EGOV initiatives, which results from the complexity of EGOV itself. That is, EGOV encompasses several aspects, involves several stakeholders, and entails complex relationships between technological, human, legal, regulatory, and administrative dimensions. Further, each EGOV initiative has its own target groups, and its prospected impacts differ depending on the context and the overall goal of the initiative.

The findings stress the crucial need to create an effective framework/instrument for assessing EGOV development and achievements from the demand side, considering the perspectives of various EGOV stakeholders (e.g., citizens, the business sector, and civil society organizations). Additionally, the findings emphasize the need to consider vulnerable and disadvantaged segments of the population (i.e., women, people in rural areas, and people with disabilities) in the assessment processes.

As is widely acknowledged, exploratory research is not typically intended to provide conclusive evidence generalizable to the population at large; instead, it mainly aims to provide general insight into a given problem/situation [Saunders et al. 2011; Zikmund et al. 2013]. Even so, and despite the findings reached, this study inevitably has limitations. The reduced number of countries participating in the questionnaire (18 out of the 193 enquired), as well as the quite limited number of international and, particularly, national EGOV assessment initiatives reviewed, may weaken the findings. We believe that additional EGOV assessment initiatives, not reported in this paper, are being conducted in other countries. Therefore, the results presented should essentially be seen as indicative, since the initiatives analyzed are not necessarily representative of the world as a whole. Nevertheless, the information provided lays essential groundwork for later studies.

Accordingly, the next logical step for this study will be to pursue the identification and analysis of new national EGOV assessment initiatives, so that we can develop a more comprehensive perspective of who is measuring EGOV, what is being assessed, and how EGOV is being assessed. Later, a set of focus groups and expert interviews with different stakeholders will be conducted, with the aim of identifying the set of measurements or assessments that relevant stakeholders consider should exist. Grounded on all these contributions, a framework for EGOV assessment will be proposed, and specific instruments to measure and assess EGOV, from different perspectives and at different levels of government, will be developed under the scope of this framework.

Acknowledgments

This paper is a result of the project “SmartEGOV: Harnessing EGOV for Smart Governance (Foundations, methods, Tools) / NORTE-01-0145-FEDER-000037”, supported by the Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF).

References