identity protection
Recently Published Documents


TOTAL DOCUMENTS: 66 (FIVE YEARS 25)
H-INDEX: 8 (FIVE YEARS 1)

2021 ◽  
Author(s):  
Yuancheng Li ◽  
Chaohang Yu ◽  
Qingle Wang ◽  
JiangShan Liu

Abstract Nowadays, identity protection has become a fundamental demand for online activities. Existing quantum anonymous communication protocols mostly rely on multi-particle entanglement. In this paper, we propose an anonymous communication protocol for an anonymous sender using single-particle states. The protocol can be extended to a communication protocol in which both the sender and the receiver are fully anonymous and the message is kept secret. In terms of security, our protocol is designed around the technique of collective detection. Compared with step-by-step detection, collective detection, in which the participants perform detection only once, reduces the complexity of the protocol to some extent. Moreover, we analytically demonstrate the security of the protocol against active attacks. No active attack mounted by an external or internal attacker can reveal any useful information about the sender's identity, and any malicious behavior will be detected by the honest participants.
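As a rough intuition for why collective detection needs only a single detection round, the following toy, purely classical sketch (a hypothetical illustration, not the paper's quantum protocol; no single-particle states are modelled) contrasts the number of detection rounds performed by step-by-step versus collective checking of decoy bits:

```python
# Hypothetical toy illustration (not the protocol from the paper): contrasts the
# number of detection rounds needed by step-by-step detection vs. collective
# detection, using classical decoy bits instead of single-particle quantum states.
import random

def noisy_channel(bits, tamper_prob):
    # Flip each decoy bit with probability tamper_prob to model an attacker's disturbance.
    return [b ^ (random.random() < tamper_prob) for b in bits]

def step_by_step_detection(n_message_particles, decoys_per_step, tamper_prob):
    rounds = 0
    for _ in range(n_message_particles):        # one detection round per transmission step
        decoys = [random.randint(0, 1) for _ in range(decoys_per_step)]
        rounds += 1
        if noisy_channel(decoys, tamper_prob) != decoys:
            return rounds, True                  # disturbance detected
    return rounds, False

def collective_detection(total_decoys, tamper_prob):
    decoys = [random.randint(0, 1) for _ in range(total_decoys)]
    detected = noisy_channel(decoys, tamper_prob) != decoys
    return 1, detected                           # a single detection round at the end

if __name__ == "__main__":
    print(step_by_step_detection(n_message_particles=8, decoys_per_step=4, tamper_prob=0.1))
    print(collective_detection(total_decoys=32, tamper_prob=0.1))
```

With the same total number of decoys, both variants catch tampering with comparable probability, but the collective variant performs the check once rather than once per transmission step.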


2021 ◽  
Author(s):  
Ezequiel Mikulan ◽  
Simone Russo ◽  
Flavia Maria Zauli ◽  
Piergiorgio d'Orio ◽  
Sara Parmigiani ◽  
...  

Deidentifying MRIs is an imperative challenge: it should preclude the possibility of re-identifying a research subject or patient while preserving as much geometrical information as possible, in order to maximize data reusability and facilitate interoperability. Although several deidentification methods exist, no comprehensive, comparative evaluation of their deidentification performance has been carried out, and the ways these methods can compromise subsequent analyses have not been exhaustively tested. To tackle these issues, we developed AnonyMI, a novel MRI deidentification method implemented as a user-friendly 3D Slicer plug-in, which aims to balance identity protection and geometrical preservation. To test these features, we performed two series of analyses in which we compared AnonyMI with two other state-of-the-art methods, evaluating both how effectively they deidentify MRIs and how much they affect subsequent analyses, with particular emphasis on source localization procedures. Our results show that all three methods significantly reduce the re-identification risk, but AnonyMI provides the best geometrical conservation. Notably, it also offers several technical advantages, such as a user-friendly interface, multiple input-output capabilities, the possibility of being tailored to specific needs, batch processing, and efficient visualization for quality assurance.
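To make the identity-protection versus geometry-preservation trade-off concrete, here is a minimal, hypothetical defacing sketch; it is not AnonyMI's method, and it assumes nibabel and NumPy are available. It simply zeroes voxels in a crudely assumed face region while leaving the rest of the volume, and hence most of its geometry, untouched:

```python
# Hypothetical, naive MRI defacing sketch (NOT AnonyMI's algorithm): zero out an
# assumed anterior-inferior "face" corner of the volume so facial features cannot
# be reconstructed, while leaving all remaining voxels, and thus most geometry, intact.
import numpy as np
import nibabel as nib  # assumed dependency

def naive_deface(in_path: str, out_path: str, face_fraction: float = 0.25) -> None:
    img = nib.load(in_path)
    data = np.asarray(img.dataobj).copy()
    # Assumption: the face lies in the low-y / low-z corner of the array.
    # A real tool would first register the scan to an atlas to locate the face.
    y_cut = int(data.shape[1] * face_fraction)
    z_cut = int(data.shape[2] * face_fraction)
    data[:, :y_cut, :z_cut] = 0
    nib.save(nib.Nifti1Image(data, img.affine, img.header), out_path)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    naive_deface("subject_T1w.nii.gz", "subject_T1w_defaced.nii.gz")
```

The face_fraction parameter is the knob for the trade-off described above: cut too little and the face remains re-identifiable; cut too much and geometry needed for downstream analyses such as source localization is lost.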


Author(s):  
Adrienne de Ruiter

Abstract Deepfake technology presents significant ethical challenges. The ability to produce realistic-looking and realistic-sounding video or audio files of people doing or saying things they did not do or say brings with it unprecedented opportunities for deception. The literature that addresses the ethical implications of deepfakes raises concerns about their potential use for blackmail, intimidation, and sabotage; ideological influencing; and incitement to violence, as well as broader implications for trust and accountability. While this literature importantly identifies and signals the potentially far-reaching consequences, less attention is paid to the moral dimensions of deepfake technology and deepfakes themselves. This article helps fill this gap by analysing whether deepfake technology and deepfakes are intrinsically morally wrong, and if so, why. The main argument is that deepfake technology and deepfakes are morally suspect, but not inherently morally wrong. Three factors are central to determining whether a deepfake is morally problematic: (i) whether the deepfaked person(s) would object to the way in which they are represented; (ii) whether the deepfake deceives viewers; and (iii) the intent with which the deepfake was created. The most distinctive aspect that renders deepfakes morally wrong is their use of digital data representing the image and/or voice of persons to portray them in ways in which they would be unwilling to be portrayed. Since our image and voice are closely linked to our identity, protection against the manipulation of hyper-realistic digital representations of our image and voice should be considered a fundamental moral right in the age of deepfakes.


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Magdalena Wischnewski ◽  
Nicole Krämer
Keyword(s):  

2021 ◽  
Vol 1947 (1) ◽  
pp. 012018
Author(s):  
Navin Kumar Agrawal

2021 ◽  
Vol 29 (3) ◽  
Author(s):  
Yazan Alshboul ◽  
Abdel Al Raoof Bsoul ◽  
Mohammed AL Zamil ◽  
Samer Samarah

Author(s):  
Brian TaeHyuk Keum

A growing number of scholars have noted that racism may thrive and persist in explicit, blatant forms in the online context, yet little research exists on the nature of racism on the Internet. In contributing to this emerging yet understudied issue, the current study conducted an inductive thematic analysis to examine people's attitudes toward (a) how the Internet has influenced racism, and (b) how people may experience racism on the Internet. The themes represented in this paper show that the increased anonymity and greater accessibility of the Internet provide a platform and identity protection for the expression and aggregation of racist attitudes. Some of the themes also highlighted positive influences: people were able to express and form anti-racist online movements and to confront racist users by taking advantage of the increased anonymity. In terms of how racism was experienced on the Internet, the author identified the following themes: vicarious observation, racist humor, negative racial stereotyping, racist online media, and racist online hate groups. Implications for future research on racism on the Internet are discussed.

