(IJCSIS) International Journal of Computer Science and Information Security,  
Vol. 13, No. 2, 2015 

New Secure Communication Design for Digital
Forensics in Cloud Computing

Mahmoud M. Nasreldin

Ain Shams University
Cairo, Egypt

Heba K. Aslan

Electronics Research Institute
Cairo, Egypt

Magdy El-Hennawy
Shorouk Academy

Cairo, Egypt

Adel El-Hennawy
Ain Shams University

Cairo, Egypt

Abstract— Digital forensics experts are facing new challenges in
collecting evidence in cloud computing environments. Evidence is
often located in data centers that are geographically separated, and
digital forensics experts cannot bear the burden of travelling to
acquire it. Moreover, the volume of hosted data is very large and the
data is complex. For evidence to be admitted in court, the
evidence-collecting process must guarantee evidence integrity,
authenticity, non-repudiation, and confidentiality. To achieve a
secure cloud forensics process, researchers have proposed many
solutions in the literature, but these have major drawbacks in
security or in high communication and computation overheads.
Furthermore, received packets should be analyzed without assuming
the availability of the entire original packet stream. Recently,
Sign-Encrypt-Sign and Encrypt-Sign-Encrypt techniques were used to
provide evidence confidentiality, authenticity, non-repudiation, and
integrity. In this paper, we propose an identity-based signcryption
protocol to reduce the computation, communication, and
implementation overheads of evidence collecting in cloud forensics.
Signcryption protocols have the advantage of achieving the basic
goals of encryption and signature protocols in a more efficient way
than Sign-Encrypt-Sign and Encrypt-Sign-Encrypt techniques. Also, a
validation of the proposed protocol using BAN logic is illustrated.

Keywords- Digital Forensics, Cloud Computing, Evidence
Collecting, Authentication, Confidentiality, Signcryption, Identity-
Based Cryptography, BAN Logic.

I. Introduction

Cloud computing brings attractive services to users and organizations
through efficient digital solutions at low cost. At the same time,
digital forensics has a rising need for digital solutions. Digital
forensics in cloud computing (cloud forensics) is a multi-disciplinary
research area with technical and legal obstacles, such as chain of
custody, acquisition of remote data, big and distributed data,
ownership, and trust. For evidence to be admitted to court, it has to
be authentic with no malleability. Sometimes, evidence confidentiality
is required. Cloud computing is the future of Information Technology
(IT) to supply organizations' needs and reduce the life-cycle cost of
services and equipment. At the same time, the cloud computing
environment raises security concerns and demands modifications to
current security solutions that do not consider the cloud in their
designs. Cloud computing makes use of the Internet to provide users
and organizations with new services. NIST describes cloud computing as
a set of computing means, such as servers, networks, services, and
applications, that deliver accessibility, flexibility, and extra
performance on demand over network access, and that comprises five
essential characteristics, three service models, and four deployment
models. Cloud computing brings consistent access to distributed
resources, and it reorganizes the IT domain due to its availability,
scalability, low maintenance cost, data and service availability
assurance, and service provision infrastructure [1-2]. In cloud
computing, there are no fears regarding over-estimation of services
that do not comply with forecasts: there is no expensive misuse of
resources, nor under-estimation for a service that becomes widespread
on a large scale. Cloud computing reduces the possibility of losing
customers and reducing revenue. Moreover, large batch-oriented tasks
can get fast results to comply with program scalability. In the new
cloud computing model, facilities are provided similarly to utilities,
such as gas, water, electricity, and telephony services. In this
model, customers do not care to identify how the services are provided
or where they are hosted. In cloud computing, the infrastructure is a
"Cloud" from which clients can access applications from anywhere using
on-demand methods. Main software industry players have acknowledged
the importance of cloud computing. A universally accepted definition
of cloud computing is not known yet, but the literature defines the
basic principles. Several authors believe that cloud computing is an
extended cluster computing environment, or more precisely Cloud
Computing = Cluster Computing + Software as a Service [3]. What is
relatively clear is that cloud computing is based on five key
characteristics, three delivery models, and four deployment models.

Cloud computing denotes both the applications delivered as services
over the Internet and the hardware and systems software in the data
centers. The data center hardware and software form the cloud. The
cloud computing service model is based on three primary tenets:
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and
Software as a Service (SaaS). In SaaS, the application is hosted and
delivered online through a web browser. In PaaS, the cloud provides
the software platform for systems. IaaS is a set of
ISSN 1947-5500


virtualized computing resources. All IT roles, such as security,
storage, applications, networking, and software, work in harmony to
provide users with a service based on the client-server model. There
are four deployment models to meet the specific requirements of cloud
services [4]:

- Public Cloud: The cloud infrastructure is available to the
public or a large industry group. The owner is an establishment that
sells cloud services (e.g. Amazon EC2).

- Private Cloud: The cloud infrastructure is operated
exclusively for a single establishment and might be managed by the
same establishment or a third party (on-premises or off-premises).
- Community Cloud: The cloud infrastructure is shared
by some establishments and supports a specific community
with common interest (e.g., security requirements, mission,
policy, or compliance considerations) and might be managed
by the same establishment or a third party (on-premises or off-
premises) (e.g. academic clouds.)

- Hybrid Cloud: The cloud infrastructure is a composition of
two or more clouds (private, community, or public). It allows data and
application portability (e.g., cloud bursting for load-balancing
between clouds) (e.g. Amazon
Cloud computing interacts with challenges that might define its degree
of utilization (i.e., data and application interoperability, security,
data exchange and transfer, business continuity and service
availability, performance unpredictability, storage scalability, bugs
in large-scale distributed systems, scaling quickly, and software
licensing). The five essential characteristics of cloud computing are
on-demand self-service, ubiquitous network access, rapid elasticity,
location-independent resource pooling, and measured service
(pay-per-use). Cloud computing accomplishes efficient utilization of
resources. However, cloud computing protocols do not provide any
mechanisms for the confidentiality or authenticity of received
messages. Authentication in cloud computing is a serious problem.
Authenticity means that the recipient can verify the identity of the
sender and ensure that the received message comes from the supposed
originator. For cloud computing communication, authentication is a
challenging problem, since it requires the verification of big data.
Cloud computing authentication protocols must therefore have low
computation and communication overheads. Researchers have proposed
many solutions in the literature. The major drawback of some of these
solutions is high communication and computation overheads; others
suffer from security pitfalls. Due to the rapid development of cloud
computing, numerous challenges in cybercrime investigations appear.
This brings the need for digital forensics professionals to extend
their expertise across the cloud computing and digital forensics
domains in order to reduce the risks of cloud security breaches. Apart
from that, some characteristics of cloud computing, such as the lack
of well-defined physical characteristics, different service models,
and different deployment models, have created a new setting for cloud
forensics. Throughout this paper, we refer to digital forensics in a
non-cloud environment as traditional digital forensics. Traditional
digital forensics requires a specific description of the evidence that
will be acquired. This description should include the physical
characteristics, namely the size, media type, evidence interfaces, and
file system format of what will be acquired. Digital forensics
(computer forensics) is the use of scientific methods for the
identification, preservation, extraction, and documentation of digital
evidence derived from digital sources to enable successful
prosecution. The objective of digital forensics is to enhance and
acquire legal evidence found in digital media. The current NIST
definition of digital forensics is the scientific procedures used to
recognize, classify, collect, evaluate, and analyze data while
maintaining the integrity of the information throughout the forensics
process. The applications of digital forensics include forensic
computing, forensic calculations, and computer forensics. Being called
into judicial proceedings is one of the risks of digital forensics.
Thus, the forensic investigation and the inspection setup must follow
a correct procedure, and this procedure or methodology must be based
on scientific principles [5].

Disk cloning cannot be used to collect evidence from distributed big
data in the cloud. Moreover, shared hosts comprise both suspicious
data related to the cybercrime and sensitive non-related data. To
enhance cybercrime investigation and protect the data
confidentiality/privacy of irrelevant users in cloud forensics, Hou et
al. [6-8] and Nasreldin et al. [9] proposed several solutions to
protect the authenticity and integrity of the collected evidence. It
is essential to have a well-thought-out way of properly handling
evidence in order to minimize errors in investigations. This
well-thought-out way is known as the digital forensic process.
Moreover, for the trustworthiness of evidence, digital forensic
investigators are typically requested to clarify, in a court of law,
the process they used in gathering evidence. This means that the
digital forensic investigator should always know the digital forensic
process and the suitable toolsets used in a digital forensic
investigation [10-11]. The digital forensic process can be classified
into four phases, namely acquisition, examination, analysis, and
reporting. This process is well known in the mobile and network
forensics fields. The acquisition phase defines how data will be
acquired from different types of digital information sources. Data has
to be acquired in a way that maintains its integrity and authenticity.
The acquired data has to undergo forensic duplication or sector-level
duplication. A write blocker should be used when building duplicates;
the write blocker guarantees that nothing is written to the original
evidence. Software imaging tools can also be used. Imaging can produce
a physical image (a bit-for-bit image of the entire physical device)
or a logical image (created from the active directories and files
available to the operating system). A hash function is used to verify
the integrity of the acquired data. A digital hash applies a
mathematical algorithm


to provide a fingerprint that authenticates that the data has not been
tampered with or altered. This fingerprint is maintained within the
case file. Several studies that focus on technical issues, challenges,
and opportunities have been done, but more research is needed to find
effective methods to evaluate the uncertainty of the evidence, or of
any forensic findings, in the cloud forensics process. Forensic
investigators need to keep themselves updated in multiple disciplines
of knowledge in order to investigate digital evidence in a cloud
environment. In particular, they need to acquire a high level of
knowledge in specific areas, such as mobile devices, hard disks, and
registries, that can be considered sources of legal evidence in court.
In order to enhance the digital forensics process in cloud computing,
a basic framework and architecture are needed [12-
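As a concrete illustration of the integrity check described above, the fingerprint of an acquired image can be computed with a standard cryptographic hash. The sketch below is a minimal Python example (the function name `fingerprint` and the file name are illustrative, not from the paper); it streams the file in chunks so that large evidence images need not fit in memory.

```python
import hashlib

def fingerprint(image_path, chunk_size=1 << 20):
    """Compute a SHA-256 fingerprint of an acquired disk image by
    streaming it in chunks, so arbitrarily large evidence files can be
    hashed without loading them into memory."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Later in the chain of custody, integrity is verified by recomputing
# the digest and comparing it to the one recorded in the case file:
# assert fingerprint("evidence.dd") == recorded_digest
```

Any mismatch between the recomputed digest and the recorded one indicates that the image was altered after acquisition.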

Cryptography offers effective techniques to ensure users' security and
privacy in an efficient way. To protect the cloud computing
environment from intruders/attackers and transmit evidence over an
insecure channel, encryption and digital signature algorithms can be
used within different designs to provide secure networks and security
solutions that protect users' information and data from being
attacked. In a previous work [9], we presented a security mitigation
to fix the scheme proposed by Hou et al. to verify data authenticity
and integrity in server-aided confidential forensic investigation [8].
In this paper, we deploy the signcryption technique, which reduces the
communication, computation, and implementation overheads. The proposed
protocol makes use of identity-based cryptography to overcome the
problems of Public Key Infrastructure (PKI). The deployment of PKIs
has many disadvantages, such as high storage cost, large bandwidth
requirement, non-transparency to users, and the need for certificate
revocation lists (CRLs). Finally, a verification of our proposed
protocol using BAN logic is performed. The remainder of this paper is
organized as follows. In the next section, we briefly review the
fundamental and technical background of cloud forensics, signcryption,
and identity-based cryptography. In Section 3, we elaborate on the
computational number theory problems related to the security of the
proposed protocol. Then, a detailed description of the proposed
identity-based signcryption protocol is given in Section 4. The
security analysis of the proposed protocol is included in Section 5.
The verification of the proposed protocol is discussed in Section 6.
Finally, we conclude in Section 7.


II. Background

A. Cloud Forensics
Cloud computing allows establishments to make use of highly scalable
infrastructure resources, pay-per-use service, and low-cost on-demand
computing, and thus attracts various establishments. However, the
security and trustworthiness of cloud infrastructure have become a
growing concern. Clouds can be the destination of attacks or the
source from which attacks are launched. Malicious individuals can
simply abuse the power of cloud computing and mount attacks from
nodes/hosts inside the cloud. Many of these attacks are original and
exclusive to clouds. Many characteristics of cloud computing make the
cloud forensics process complex. In cloud computing, the storage
system is not local [15]. Moreover, law enforcement agents cannot
seize the suspect's computer/digital device in order to get access to
the digital evidence, even with a summons. In the cloud, each
server/host contains files from many users. Therefore, it is not easy
to confiscate servers/hosts from a data center without violating the
privacy of other users. Furthermore, when identifying data that
belongs to a particular suspect, it is difficult to separate it from
other users' data. There is no standard way, other than the cloud
provider's word, to link given evidence to a particular suspect. So,
the credibility of the evidence is also doubtful

In traditional digital forensics, investigators have physical access
to and full control over the evidence (e.g., process logs, router
logs, and hard disks). Unfortunately, in the cloud digital forensics
case, the control over data differs among service models. There are
different levels of control over customers' data for the three service
models (i.e., Infrastructure as a Service (IaaS), Platform as a
Service (PaaS), and Software as a Service (SaaS)). Cloud users have
the highest control in IaaS and the least control in SaaS. Thus, the
lack of physical access to the evidence and the absence of control
over the system make evidence acquisition a challenging task in the
cloud environment. In the cloud computing environment, the source of
the evidence is ubiquitous and the connection to the source is
complicated. Furthermore, the investigators may have to hire others
(inside/outside the country). Unlike copying a file from one folder to
another, the process of retrieving evidence from cloud storage is
complex. Usually, it costs a lot of time and money in parallel to the
investigation time. Investigators have to determine the computational
structure, the attribution of data, and the integrity of the data.
Also, investigators have to preserve the stability of the evidence and
present/visualize it [17-18].

There are two different ways to involve digital forensic investigation
in cloud computing: in the first, the cloud is considered a tool of
the crime; in the second, the cloud hosts a service that is the target
of the crime. In this section, we elaborate on the inspection of a
targeted system of a forensics investigation that exists in the cloud.
There are many technical ways to conduct a forensic examination in a
cloud environment, and these ways are similar to a traditional
examination. In the cloud environment, there are three aspects to be
considered. First, the nature of the crime determines the type of the
system (live or dead) on which the forensics process will be
performed. Second, it must be determined what took place in the cloud.
Third, the availability of a secure channel to collect evidence over
the cloud (i.e., the collecting client installed on the cloud
nodes/hosts must deploy digital signature and encryption algorithms to
communicate with the imager device). Traditional digital forensics has
two scenarios of evidence acquisition (i.e., live-system/powered-on-system acquisition,

and dead-system/powered-off-system acquisition). In a dead system,
investigators only analyze hard disk images (stored data without
power). Live systems allow more evidence to be acquired and analyzed
than dead systems; for the same case, more evidence (e.g., running
processes) can be acquired from a live system than from a dead one.
One advantage of digital forensics in the cloud environment over
traditional digital forensics is that digital forensics in the cloud
environment is considered live-system forensics. The cloud holds
valuable information, and there is a possibility that it is partially
up in the case of compromise. This gives the investigator more files,
connections, and services to acquire and investigate. The cloud is
totally dead only when the entire cloud is shut down; this possibility
is almost impossible and contradicts the basic idea of the cloud
environment [19-21]. Trust in the cloud environment is a very
important issue. For example, assume that a computer has been
manipulated to plan a murder and law enforcement removes the hard
drive for imaging. In this case, law enforcement must trust their hard
drive hardware to correctly read the disk. On the other hand, if law
enforcement runs a forensic tool on a live computer, they must trust
the integrity of the host operating system in addition to the
hardware. If the compromised system is hosted in the cloud, new layers
of trust are introduced. As a risk mitigation strategy, the forensic
investigator should examine evidence as multiple items, as mentioned
before in the seven acquiring steps. This allows the investigator to
check for inconsistency and to correlate evidence [22-23].

In [8], Hou et al. proposed an "encryption-then-blind signature with
designated verifier" scheme to prove the authenticity and integrity of
evidence in the cloud environment. Hou et al. aim to improve
investigation efficiency and protect the privacy of irrelevant users.
One strategy is to let the server administrator search, retrieve, and
hand over only the relevant data to the investigator, where the
administrator is supposed to be responsible for managing the data in a
secure manner. For some special crimes, the investigator may not want
the administrator to know what he is looking for. In short, it is
indispensable to consider how to protect both the confidentiality of
the investigation and the privacy of irrelevant users in such a
forensic investigation. For simplicity of description, Hou et al.
refer to this problem as "server-aided confidential forensic
investigation". When the above-mentioned relevant data is presented as
evidence during a trial, Hou et al. aim to ensure that the
administrator (or a third party the administrator trusts) can verify
whether the presented evidence is data that comes from the server and
whether the evidence has been altered or not.

B. Signcryption
The common approach to achieving both evidence confidentiality and
authenticity is to sign the evidence and encrypt it together with its
signature. The sender signs the evidence using a digital signature
scheme and then encrypts it with an appropriate encryption algorithm:
the evidence and its signature are encrypted with a private-key
(symmetric) encryption algorithm under a randomly chosen message
encryption key, and the random key is then encrypted using the
recipient's public key. These are the "sign-then-encrypt" and
"encrypt-then-sign" techniques. Encrypt-then-sign is subject to
plaintext-subsection and text-stealing attacks, while the
sign-then-encrypt composition suffers from a
forwarding attack [24-25].
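The composition described above can be sketched with textbook RSA. This is a toy illustration of the ordering of operations only; the key values are tiny hypothetical numbers and unpadded RSA is completely insecure, so the sketch should not be read as the construction the paper analyzes.

```python
# Sender's signing keypair (n = 61*53) and recipient's encryption keypair
# (n = 89*97): toy textbook-RSA values chosen so that e*d = 1 mod phi(n).
n_sig, e_sig, d_sig = 3233, 17, 2753
n_enc, e_enc, d_enc = 8633, 5, 5069

def sign_then_encrypt(m):
    sig = pow(m, d_sig, n_sig)              # 1. sign with sender's private key
    # 2. encrypt message and signature under the recipient's public key
    return pow(m, e_enc, n_enc), pow(sig, e_enc, n_enc)

def decrypt_then_verify(c_m, c_sig):
    m = pow(c_m, d_enc, n_enc)
    sig = pow(c_sig, d_enc, n_enc)
    assert pow(sig, e_sig, n_sig) == m      # verify with sender's public key
    return m

assert decrypt_then_verify(*sign_then_encrypt(42)) == 42
# Forwarding attack on sign-then-encrypt: the recipient holds a valid
# (m, sig) pair and can re-encrypt it for a third party, who will accept
# it as a message the sender addressed to them.
```

The final comment shows why the plain composition is insufficient, which motivates the Sign-Encrypt-Sign, Encrypt-Sign-Encrypt, and signcryption approaches discussed next.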

To mitigate these security breaches, Sign-Encrypt-Sign and
Encrypt-Sign-Encrypt techniques are used [9, 26-33]. However,
Sign-Encrypt-Sign and Encrypt-Sign-Encrypt suffer from computation,
implementation, and communication overheads. The term signcryption was
originally introduced and studied by Zheng in [34], with the primary
goal of reaching greater efficiency than can be accomplished when
performing the signature and encryption operations separately. In
spite of proposing some security arguments, most of the work on
signcryption [34-50] missed formal definitions and analysis. A
signcryption scheme requires one computation for "encryption" and one
inverse computation for "authentication", which is of great practical
significance when directly processing long messages, since the major
bottleneck for many public-key encryption schemes is the excessive
computational overhead of performing these two operations [26].
Moreover, signcryption schemes must achieve non-repudiation, which
guarantees that the sender of a message cannot later repudiate having
sent the message; namely, the recipient of a message can convince a
third party that the sender indeed sent it. It is worth noting that
typical signature schemes provide non-repudiation, since anyone who
knows only the sender's public key can verify the signature. This is
not the case for signcryption, because the confidentiality property
entails that only the recipient can comprehend the contents of a
signcrypted message sent to him. Nevertheless, it is feasible to
accomplish non-repudiation by other means. Signcryption can thus be
applied in place of separate encryption and signing to reduce both
communication bandwidth and computational time overheads. Any
authentication scheme for big data streams should verify the received
packets without assuming the availability of the entire original
stream.

C. Identity-Based Cryptography
Public Key Infrastructures (PKIs) [51] bind public keys to their
corresponding digital certificates. This is a mandatory requirement to
provide the authenticity of public keys that users can trust in order
to perform encryption and signing operations. Unfortunately, the
deployment of PKIs has many disadvantages, such as high storage cost,
large bandwidth requirement, non-transparency to users, and the need
for certificate revocation lists (CRLs). In order to bypass the trust
problems encountered in conventional PKIs, Shamir [52] introduced the
concept of identity-based cryptography in 1984 and constructed an
ID-based signature scheme. Identity-based cryptography is a type of
public-key cryptography in which the public key of a user is some
unique information about the identity of the user (e.g., an e-mail

address, an IP address, or a social security number). Identity-based
cryptosystems simplify key management and remove the need for public
key certificates as much as possible. This is due to the fact that the
public key is the identity of its owner, and hence there is no need to
bind users and their public keys by digital certificates. The only
keys that still need to be certified are the public keys of the
trusted authorities (called the Private Key Generators (PKGs)) that
generate the private keys associated with users' identities. Several
practical solutions for Identity-based Signatures (IBS) rapidly
appeared after Shamir's original paper, but, despite several attempts
[53-57], finding a practical Identity-based Encryption (IBE) scheme
remained an open challenge until 2001. The earlier proposals either
require tamper-proof hardware, expensive private-key generation
operations for PKGs, or end users who are assumed not to collude in
order to expose the authority's master key. The first practical
construction came in 2001, when Boneh and Franklin [58] proposed to
use bilinear pairings to construct an elegant identity-based
encryption algorithm. Another IBE scheme was also suggested by Cocks
[59]. This second method relies on simpler mathematics but is much
less practical because of the large expansion in the size of its
ciphertext. Many other identity-based signature and key agreement
schemes based on pairings were later proposed [60-

III. Computational Number Theory Problems

In this section, the computational number theory problems related to
the security of the proposed protocol are discussed. The reader is
referred to [58, 65, 66] for further details regarding the definitions
below.

A. Elliptic Curve Discrete Logarithm (ECDL)

Let q be a prime power and let Fq denote the finite field of order q.
Let E(Fq) denote the set of points on the elliptic curve E over the
field Fq, and let #E(Fq) denote the order of the group E(Fq). Let
P ∈ E(Fq) be a point of order p | #E(Fq). The Elliptic Curve Discrete
Logarithm (ECDL) problem is defined as follows:

Elliptic Curve Discrete Logarithm (ECDL) problem: Given a point P on
the elliptic curve, along with the curve coefficients, and a point
Q = xP, find the integer x, 0 ≤ x ≤ p − 1, such that Q = xP.
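To make the definition concrete, the sketch below implements the group operations on a toy curve in Python and recovers x from Q = xP by exhaustive search. The curve y² = x³ + 2x + 2 over F17 with base point P = (5, 1) is a common textbook example, not a parameter from the paper; exhaustive search is only feasible because the group is tiny, which is exactly why real systems use curves of cryptographic size.

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17; None denotes the point at infinity.
p, a = 17, 2

def add(P1, P2):
    """Standard affine point addition (with doubling) on the toy curve."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, pt):
    """Double-and-add scalar multiplication kP."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

P = (5, 1)               # base point of prime order 19 on this curve
Q = mul(13, P)           # public value; the secret scalar is x = 13
# Solving ECDL by brute force over the order of P:
x = next(k for k in range(19) if mul(k, P) == Q)
```

On a curve of cryptographic size the same search is infeasible, which is what the hardness assumption captures.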

B. Diffie-Hellman Problems

An abstract understanding of bilinear mappings requires knowledge of
Gap Diffie-Hellman groups and bilinear groups. Gap Diffie-Hellman
groups arise from separating the computational and decisional
Diffie-Hellman problems. Bilinear groups are based on the existence of
a bilinear map. Let G be an additive cyclic group of prime order p,
and let P be its generator. In this group, the well-known
Diffie-Hellman problems are as follows [67-69].

Computational Diffie-Hellman (CDH): Given P, aP, Q ∈ G, compute
aQ ∈ G. An algorithm that solves the computational Diffie-Hellman
problem is a probabilistic polynomial-time Turing machine that, on
input P, aP, Q, outputs aQ with non-negligible probability. The
Computational Diffie-Hellman assumption states that there is no such
probabilistic polynomial-time Turing machine. This assumption is
believed to be true for many cyclic groups, such as the prime-order
subgroups of the multiplicative groups of finite fields [70].

Decisional Diffie-Hellman (DDH): Given P, aP, Q, bQ ∈ G, decide
whether a equals b. Quadruples of this form (P, aP, Q, bQ) are named
Diffie-Hellman quadruples.

Gap Diffie-Hellman (GDH) groups: GDH groups are examples of the gap
problems presented in [71]. There are many subgroups of the group Zq*
that have prime order, and both the CDH and DDH assumptions are
believed to hold over these groups; the subgroup G of prime order p is
one of these. However, on certain elliptic-curve groups, the DDH
problem is easy to solve, whereas CDH is believed to be hard [68].
Such groups are named Gap Diffie-Hellman (GDH) groups. Hence, if G
belongs to these specific elliptic-curve groups, we call it a Gap
Diffie-Hellman group.
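A toy numeric instance may help: in the subgroup of Z23* of prime order 11 generated by g = 4, solving CDH is easy because the group is small enough to recover a discrete logarithm by brute force. The values below are illustrative choices of ours, not parameters from the paper.

```python
# Subgroup of Z_23* generated by g = 4, which has prime order 11.
p_mod, g = 23, 4
a, b = 6, 9
ga, gb = pow(g, a, p_mod), pow(g, b, p_mod)   # the public CDH inputs
target = pow(g, a * b, p_mod)                 # what CDH asks to compute

# In this tiny group an attacker can solve the discrete logarithm first...
a_rec = next(x for x in range(11) if pow(g, x, p_mod) == ga)
# ...and then answer the CDH challenge:
assert pow(gb, a_rec, p_mod) == target
# DDH, in the same multiplicative notation: given (g, g^a, h, h^b),
# decide whether a == b without being given a or b.
```

In a group of cryptographic size the discrete-logarithm step is infeasible, so the CDH value stays out of reach, which is the content of the CDH assumption above.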

C. Bilinear Maps
Bilinear groups. Until now, there is no known implementable example of
GDH groups except through bilinear maps. A bilinear group is any group
that possesses such a map e and on which CDH is hard.
Bilinear maps. Assume that G is an additive group and GT is a
multiplicative group such that |G| = |GT| = p, where p is a prime
number, and P is the generator of G. Then, the map e : G × G → GT is a
computable bilinear map if it satisfies:

1) Computability: There is an efficient algorithm to compute
e(P, Q) for all P, Q ∈ G.
2) Bilinearity: For all P, Q ∈ G and a, b ∈ Z, we have
e(aP, bQ) = e(P, Q)^ab.
3) Non-Degeneracy: e(P, P) ≠ 1. In other words, if P is a
generator of G, then e(P, P) generates GT.
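The three properties can be checked mechanically on a toy "pairing". The sketch below takes G = (Z11, +), GT = the order-11 subgroup of Z23*, and e(a, b) = g^(ab) mod 23 with g = 2. This map is bilinear and non-degenerate, but it is cryptographically useless (discrete logarithms in G are trivial) and serves only to illustrate the definitions; the parameters are our own illustrative choices, not from the paper.

```python
# Toy bilinear map: G = (Z_11, +), GT = <2> in Z_23* (2 has order 11 mod 23),
# and e(a, b) = g^(a*b mod 11) mod 23.
q, N, g = 11, 23, 2

def e(a, b):
    return pow(g, (a * b) % q, N)

P = 1                                          # generator of (Z_11, +)
# 1) Computability: e is a single modular exponentiation.
# 2) Bilinearity: e(aP, bQ) = e(P, Q)^(a*b).
assert e(4 * P % q, 7 * P % q) == pow(e(P, P), 4 * 7, N)
# 3) Non-degeneracy: e(P, P) = g generates GT, so it is not 1.
assert e(P, P) != 1
# A pairing makes DDH easy: (P, aP, Q, bQ) is a Diffie-Hellman quadruple
# exactly when e(aP, Q) == e(P, bQ).
Qpt = 3 * P % q
assert e(4 * P % q, Qpt) == e(P, 4 * Qpt % q)      # a == b == 4: a quadruple
assert e(4 * P % q, Qpt) != e(P, 7 * Qpt % q)      # a = 4, b = 7: not one
```

The last two assertions show concretely why a group with an efficient pairing is a Gap Diffie-Hellman group: DDH becomes easy while CDH may remain hard.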

Bilinear Diffie-Hellman Problem. The group G is a subgroup of the
additive group of points of an elliptic curve E(Fq). The group GT is a
subgroup of the multiplicative group of the finite field Fq and
|G| = |GT| = p, where p is a prime number. Let e : G × G → GT be a
bilinear pairing on (G, GT). The Bilinear Diffie-Hellman Problem
(BDHP) is the following: Given P, aP, bP, cP, compute e(P, P)^abc.
Typically, the mapping e : G × G → GT is derived from either the Weil
or the Tate pairing on an elliptic curve over a finite field. More
comprehensive details on GDH groups, bilinear pairings, and other
parameters are given in [42-46].


IV. The Proposed Identity-Based Signcryption Protocol

Signcryption techniques are intended to simultaneously accomplish
confidentiality, authentication, and non-repudiation while reducing
communication and computation overheads. In this section, we propose
an identity-based signcryption protocol to reduce the computation,
communication, and implementation overheads of evidence collecting in
cloud forensics. The


proposed protocol is more efficient than all previously presented
protocols. It allows the recipient (verifier) to restore the message
blocks upon receiving their corresponding signature blocks. The
proposed protocol is well suited to such application requirements and
fits packet-switched networks. In the proposed protocol, we construct
two stages of verification to ensure that the message has been
recovered efficiently and correctly. The first verification step
ensures the integrity and authenticity of the message (e.g., no
modification or substitution in the ciphertext ri). The second
verification step ensures that the message Mi has been reconstructed
successfully. This stage is useful for public verification in case a
dispute takes place, and it guarantees that the proposed protocol
satisfies the non-repudiation property. In order to perform the
proposed protocol, the following parameters must be set.

Setup: The Private Key Generation center (PKG) chooses a Gap
Diffie-Hellman group G1 of prime order q, a multiplicative group G2 of
the same order, and a bilinear map e : G1 × G1 → G2, together with an
arbitrary generator P ∈ G1. Then it chooses a random value s ∈ Zq* as
the master secret key and computes the corresponding public key
Ppub = sP. H1 and H2 are two secure cryptographic hash functions, such
that H1 : {0, 1}* → G1 and H2 : {0, 1}* → Zq*. The system parameters
are (G1, G2, P, Ppub, H1, H2, e, q), and the master secret key is s.

KeyExtract: Given an identity ID, the PKG computes SID =
sH1(ID) and sends it to the user with identity ID. We define
QID = H1(ID) as the public key of the user with identity ID. We
assume that the sender is A with identity IDA. The sender A has
public key QA = H1(IDA) and secret key SA = sQA. The recipient,
B, has identity IDB. The recipient B has public key QB = H1(IDB)
and secret key SB = sQB.
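The Setup and KeyExtract phases can be sketched with the toy pairing group introduced earlier (an insecure illustrative stand-in; the identity strings and parameter sizes are made up):

```python
import hashlib
import secrets

# Toy pairing group (illustrative only; not secure).
q, p, g, P = 1019, 2039, 4, 1

def e(x, y):
    """Toy bilinear map e(x, y) = g^(x*y) mod p."""
    return pow(g, (x * y) % q, p)

def H1(ident: str) -> int:
    """H1 : {0,1}* -> G1 (here, a nonzero scalar mod q)."""
    d = hashlib.sha256(b"H1|" + ident.encode()).digest()
    return int.from_bytes(d, "big") % (q - 1) + 1

# Setup: the PKG picks a master secret s and publishes Ppub = sP.
s = secrets.randbelow(q - 1) + 1
Ppub = s * P % q

def key_extract(ident: str):
    """KeyExtract: Q_ID = H1(ID) is public; S_ID = s * H1(ID) is secret."""
    Q_ID = H1(ident)
    return Q_ID, s * Q_ID % q

QA, SA = key_extract("IDA")   # sender A
QB, SB = key_extract("IDB")   # recipient B

# Sanity check of the PKG relation: e(S_ID, P) == e(Q_ID, Ppub).
assert e(SA, P) == e(QA, Ppub)
assert e(SB, P) == e(QB, Ppub)
```

The final assertions show why identity-based keys need no certificate: anyone can bind Q_ID to the PKG's public key Ppub through the pairing.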

When a sender A wants to send a message M to the
recipient B, it divides the stream into blocks Mi, for
i = 1, 2, …, n.

SignCrypt: The sender A, with secret key SA and public
key QA, performs the following steps before sending the
signcrypted message. The sender A chooses a random number
k ∈ Zq* and lets r0 = 0. Then, A calculates:

(1) ri = Mi ⊕ H2(ri−1, e(P, QB)^k), for i = 1, 2, 3, …, n

(2) α = H2(r1, …, rn, e(P, QB)^k)

(3) β = H2(M1, …, Mn, α, e(P, P)^k)

(4) γ = βP

(5) θ = βQB

(6) S = β⁻¹kP − β⁻¹SA

The sender, A, sends (S, α, γ, θ, r1, …, rn) to B over a
non-secure channel.
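Steps (1)-(6) can be sketched end to end in the toy setting. This is an illustrative sketch under stated assumptions: the group is an insecure stand-in, the identities are hypothetical, and the 256-bit widening of H2 for the XOR masking in step (1) is an implementation choice of ours.

```python
import hashlib
import secrets

q, p, g, P = 1019, 2039, 4, 1                     # toy pairing group (insecure)
def e(x, y): return pow(g, (x * y) % q, p)        # e(x, y) = g^(xy) mod p

def H1(ident):
    return int.from_bytes(hashlib.sha256(b"H1|" + ident.encode()).digest(), "big") % (q - 1) + 1

def H2(*parts):
    h = hashlib.sha256(b"H2|")
    for part in parts:
        h.update(str(part).encode() + b"|")
    return int.from_bytes(h.digest(), "big") % (q - 1) + 1

def H2_mask(r_prev, t):
    """H2 widened to 256 bits so it can XOR-mask a whole block (assumption)."""
    return int.from_bytes(hashlib.sha256(f"m|{r_prev}|{t}".encode()).digest(), "big")

s = secrets.randbelow(q - 1) + 1                  # PKG master key
Ppub = s * P % q
QA = H1("IDA"); SA = s * QA % q                   # sender A's key pair
QB = H1("IDB"); SB = s * QB % q                   # recipient B's key pair

def signcrypt(blocks):
    k = secrets.randbelow(q - 1) + 1              # random k in Zq*
    tB = pow(e(P, QB), k, p)                      # e(P, QB)^k
    tP = pow(e(P, P), k, p)                       # e(P, P)^k
    r, r_prev = [], 0                             # r0 = 0
    for Mi in blocks:                             # (1) ri = Mi XOR H2(r_{i-1}, e(P,QB)^k)
        ri = Mi ^ H2_mask(r_prev, tB)
        r.append(ri); r_prev = ri
    alpha = H2(*r, tB)                            # (2)
    beta = H2(*blocks, alpha, tP)                 # (3)
    gamma = beta * P % q                          # (4)
    theta = beta * QB % q                         # (5)
    b_inv = pow(beta, -1, q)
    S = (b_inv * k - b_inv * SA) % q              # (6) S = b^-1 kP - b^-1 SA
    return S, alpha, gamma, theta, r

msgs = [int.from_bytes(f"evidence-{i}".encode(), "big") for i in range(3)]
S, alpha, gamma, theta, r = signcrypt(msgs)
print("ciphertext blocks:", r)
```

Note that β is derived from the plaintext blocks and is never transmitted; only (S, α, γ, θ, r1, …, rn) leaves the sender.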

Un-SignCrypt: The recipient, B:

(1) Verifies: α ?= H2(r1, …, rn, e(θ, S) · e(SB, QA))

(2) Recovers M: Mi = ri ⊕ H2(ri−1, e(θ, S) · e(SB, QA)), for i = 1, 2, …, n

(3) Checks: γ ?= H2(M1, …, Mn, α, e(γ, S) · e(Ppub, QA)) · P


After receiving the sent message, the recipient checks the
signature by comparing α to H2(r1, …, rn, e(θ, S) · e(SB, QA)). If the
check does not hold, this indicates that the received packets are
modified and must be discarded. On the other hand, if the
check holds, then the recipient recovers the message blocks as
Mi = ri ⊕ H2(ri−1, e(θ, S) · e(SB, QA)).
Finally, the recipient checks whether the message blocks have been
reconstructed correctly by comparing γ to
H2(M1, …, Mn, α, e(γ, S) · e(Ppub, QA)) · P. For public
verification, the recipient B just needs to reveal (M, S, α, γ, θ).
Then, any verifier can check whether (S, γ) is the sender A's
signcrypted message by comparing γ to
H2(M1, …, Mn, α, e(γ, S) · e(Ppub, QA)) · P. This equation
links the message M, A's public key QA, and the signcrypt
quadruple (S, α, γ, θ) together. If the equation holds, the
recipient (verifier), B, concludes that (S, γ) is a valid signcrypt
for the message M by the sender (signer), A.

The proposed protocol provides both confidentiality and authenticity
simultaneously; therefore, the computation overhead
decreases, which makes the proposed protocol appropriate for
big data applications. To decrease the communication
overhead, which is considered one of the major disadvantages
of using signcryption techniques, we use bilinear pairings and
identity-based cryptography. Bilinear pairings and Elliptic
Curve Cryptography (ECC) use smaller key sizes than RSA
cryptosystems for the same level of security. Moreover,
identity-based cryptography avoids the centralized certificate
management of a PKI, which requires continuous updates of the
revocation list. Furthermore, the proposed protocol eliminates the
need to encrypt the message using each recipient's public key
and, as a result, lowers the communication overhead. Another
advantage of the proposed protocol is that it can resist both
packet-loss and pollution attacks with low computation and
communication overheads. It allows the recipient (verifier) to
recover the message blocks upon receiving their
corresponding signature blocks, and it is designed for packet-
switched networks. In the next section, the security analysis of
the proposed protocol is detailed.
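Both verification stages and the block recovery can be exercised with a full round trip in the toy setting (an illustrative, insecure sketch; the widened H2 mask and all names are our own assumptions):

```python
import hashlib
import secrets

q, p, g, P = 1019, 2039, 4, 1                     # toy pairing group (insecure)
def e(x, y): return pow(g, (x * y) % q, p)

def H1(ident):
    return int.from_bytes(hashlib.sha256(b"H1|" + ident.encode()).digest(), "big") % (q - 1) + 1

def H2(*parts):
    h = hashlib.sha256(b"H2|")
    for part in parts:
        h.update(str(part).encode() + b"|")
    return int.from_bytes(h.digest(), "big") % (q - 1) + 1

def H2_mask(r_prev, t):  # widened H2 for the XOR step (assumption)
    return int.from_bytes(hashlib.sha256(f"m|{r_prev}|{t}".encode()).digest(), "big")

s = secrets.randbelow(q - 1) + 1
Ppub = s * P % q
QA = H1("IDA"); SA = s * QA % q
QB = H1("IDB"); SB = s * QB % q

def signcrypt(blocks):
    k = secrets.randbelow(q - 1) + 1
    tB, tP = pow(e(P, QB), k, p), pow(e(P, P), k, p)
    r, r_prev = [], 0
    for Mi in blocks:
        r.append(Mi ^ H2_mask(r_prev, tB)); r_prev = r[-1]
    alpha = H2(*r, tB)
    beta = H2(*blocks, alpha, tP)
    b_inv = pow(beta, -1, q)
    return ((b_inv * k - b_inv * SA) % q,         # S
            alpha, beta * P % q, beta * QB % q, r)

def unsigncrypt(S, alpha, gamma, theta, r):
    t1 = e(theta, S) * e(SB, QA) % p              # = e(P, QB)^k; needs B's key SB
    if alpha != H2(*r, t1):                       # stage 1: integrity/authenticity
        raise ValueError("modified packets - discard")
    blocks, r_prev = [], 0
    for ri in r:                                  # recover Mi = ri XOR H2(r_{i-1}, t1)
        blocks.append(ri ^ H2_mask(r_prev, t1)); r_prev = ri
    t2 = e(gamma, S) * e(Ppub, QA) % p            # = e(P, P)^k; public values only
    if gamma != H2(*blocks, alpha, t2) * P % q:   # stage 2: correct reconstruction
        raise ValueError("reconstruction check failed")
    return blocks

msgs = [int.from_bytes(f"evidence-{i}".encode(), "big") for i in range(3)]
assert unsigncrypt(*signcrypt(msgs)) == msgs
```

Stage 1 uses B's secret key SB, so only the intended recipient can decrypt; stage 2 uses only public values, which is what later enables public verification in a dispute.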


The security of the proposed protocol is based on the
intractability of reversing the secure cryptographic hash
function and the Elliptic Curve Discrete Logarithm (ECDL)

problem. We analyze the security of the proposed protocol as
follows.

Correctness:

Using the bilinearity of e, the recipient B can compute the
pairing value e(P, QB)^k from (S, θ) and its own secret key SB:

e(θ, S) · e(SB, QA)
= e(βQB, β⁻¹kP − β⁻¹SA) · e(sQB, QA)
= e(QB, kP − SA) · e(QB, SA)
= e(QB, P)^k · e(QB, SA)⁻¹ · e(QB, SA)
= e(P, QB)^k

This means that the receiver, B, can calculate α as follows:

α = H2(r1, …, rn, e(θ, S) · e(SB, QA)) = H2(r1, …, rn, e(P, QB)^k)

Similarly, B can compute e(P, P)^k from (S, γ) and the public
values Ppub and QA:

e(γ, S) · e(Ppub, QA)
= e(βP, β⁻¹kP − β⁻¹SA) · e(sP, QA)
= e(P, kP − SA) · e(P, SA)
= e(P, P)^k · e(P, SA)⁻¹ · e(P, SA)
= e(P, P)^k

This means that the receiver, B, can calculate γ as follows:

γ = H2(M1, …, Mn, α, e(γ, S) · e(Ppub, QA)) · P
  = H2(M1, …, Mn, α, e(P, P)^k) · P = βP

Given that the sender, A, generates (using the SignCrypt
algorithm) and sends the signcrypted blocks and signature to
B, the receiver, B, can recover the message M correctly using
the Un-SignCrypt algorithm in the proposed protocol.
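The two pairing identities, e(θ, S) · e(SB, QA) = e(P, QB)^k and e(γ, S) · e(Ppub, QA) = e(P, P)^k, can be checked numerically in the toy group (an insecure illustrative stand-in; QA and QB are arbitrary stand-in identity hashes):

```python
import secrets

q, p, g, P = 1019, 2039, 4, 1                 # toy pairing group (insecure)
def e(x, y): return pow(g, (x * y) % q, p)

s = secrets.randbelow(q - 1) + 1              # PKG master key
Ppub = s * P % q
QA, QB = 123, 456                             # stand-in identity hashes
SA, SB = s * QA % q, s * QB % q

k = secrets.randbelow(q - 1) + 1              # sender's ephemeral secret
beta = secrets.randbelow(q - 1) + 1           # stands in for the hash output
b_inv = pow(beta, -1, q)
S = (b_inv * k - b_inv * SA) % q              # S = b^-1 kP - b^-1 SA
gamma, theta = beta * P % q, beta * QB % q

# Recipient-side value: e(theta, S) * e(SB, QA) == e(P, QB)^k
assert e(theta, S) * e(SB, QA) % p == pow(e(P, QB), k, p)

# Publicly computable value: e(gamma, S) * e(Ppub, QA) == e(P, P)^k
assert e(gamma, S) * e(Ppub, QA) % p == pow(e(P, P), k, p)
```

Both identities hold for any β and k, which is exactly what makes the two verification stages of Un-SignCrypt work.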


Unforgeability:

The proposed protocol generates (S, α, γ, θ, r1, …, rn) and β.
The sender A keeps β and sends (S, α, γ, θ, r1, …, rn) to the
recipient B, where γ = βP. Any adversary who aims to get
β from γ has to solve the Elliptic Curve Discrete Logarithm
problem. Therefore, neither the recipient B nor any other
adversary can forge valid signcrypted blocks (S, α, γ, θ, r1,
…, rn) for any message M in a way that satisfies the
verification of the Un-SignCrypt algorithm. The recipient B
cannot forge (M, S, α, γ, θ) for a verifier such that (M, S, α, γ,
θ) satisfies γ = H2(M1, …, Mn, α, e(γ, S) · e(Ppub, QA)) · P.
This is due to the fact that β appears both as the discrete
logarithm of γ (over the elliptic curve) and as the output of a
cryptographic hash function that takes the message as input.
Thus, the attacker has to break both the cryptographic
hash function and the discrete logarithm problem over the elliptic
curve. Alternatively, the attacker must get the sender (signer) A's secret
key SA, and only the sender knows this secret key.


Confidentiality:

To recover the message M from the signcrypt quadruple and
signcrypted blocks (S, α, γ, θ, r1, …, rn), the attacker must
obtain e(P, QB)^k. In the proposed protocol, k is a random
number kept secret by the sender, and e(P, QB)^k is therefore a
value that is unknown to the attacker. Without the recipient B's
secret key SB, the attacker cannot calculate e(θ, S) · e(SB, QA) =
e(P, QB)^k. Thus, the proposed protocol preserves the
confidentiality property.

Forward Secrecy:

The proposed protocol makes use of identity-based
cryptography, so certificate revocation list (CRL) problems
do not exist in the proposed protocol. Hence, there is no need
to reveal the secret key of the sender (signer/encryptor).
Therefore, there is no forward secrecy problem in the
proposed protocol.


Non-repudiation:

In case a dispute takes place between the sender and the
recipient over signcrypted blocks, a trusted third party can ask
the recipient B to reveal (M, S, α, γ, θ). Then, the trusted third
party can check whether the signcrypted blocks (S, α, γ, θ, r1, …,
rn) were generated by the sender A by comparing γ to
H2(M1, …, Mn, α, e(γ, S) · e(Ppub, QA)) · P. This equation
links the message M, A's public key QA, and the signcrypt
quadruple (S, α, γ, θ) together. If the equation holds, the
trusted third party concludes that (S, γ) is a valid signcrypt for
the message M by the sender (signer), A, to the recipient
(verifier), B. Thus, the non-repudiation property is
accomplished in the proposed protocol.
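The third-party check uses only the revealed tuple (M, S, α, γ, θ) and the public values QA and Ppub. In the toy setting (insecure illustrative parameters; the α value below is a stand-in rather than the full hash over r-blocks) it reduces to recomputing β:

```python
import hashlib
import secrets

q, p, g, P = 1019, 2039, 4, 1                 # toy pairing group (insecure)
def e(x, y): return pow(g, (x * y) % q, p)

def H2(*parts):
    h = hashlib.sha256(b"H2|")
    for part in parts:
        h.update(str(part).encode() + b"|")
    return int.from_bytes(h.digest(), "big") % (q - 1) + 1

s = secrets.randbelow(q - 1) + 1
Ppub = s * P % q
QA, QB = 123, 456                             # stand-in identity hashes
SA = s * QA % q

# Sender side: produce (S, alpha, gamma, theta) for message blocks M.
M = [111, 222, 333]
k = secrets.randbelow(q - 1) + 1
alpha = H2(*M, "r-digest", pow(e(P, QB), k, p))   # stand-in for H2(r1..rn, .)
beta = H2(*M, alpha, pow(e(P, P), k, p))
b_inv = pow(beta, -1, q)
S = (b_inv * k - b_inv * SA) % q
gamma, theta = beta * P % q, beta * QB % q

def ttp_verify(M, S, alpha, gamma, theta):
    """Dispute resolution: no secret keys needed, only QA and Ppub."""
    t2 = e(gamma, S) * e(Ppub, QA) % p        # = e(P, P)^k
    return gamma == H2(*M, alpha, t2) * P % q

assert ttp_verify(M, S, alpha, gamma, theta)
# An altered message changes beta, so the recomputed gamma fails to
# match with overwhelming probability at real hash sizes.
```

Because the check binds γ to the message through β, a sender who later denies the message would have to explain how γ = βP was formed without SA.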


Authentication protocols are the basis of security in
many distributed systems, and it is therefore essential to
ensure that these protocols function correctly. Unfortunately,
their design has been extremely error prone. Most of the
protocols found in the literature contain redundancies or
security flaws [72]. In [72], M. Burrows et al. proposed a
method that uses logic to describe authentication
protocols. They transformed each message into a logical
formula which is an idealized version of the original message.
In this section, a logical analysis of the proposed protocol
using BAN logic is presented. For a successful verification of
the protocol, the belief state of the communicating parties should

satisfy the protocol goals. We will consider the proposed
protocol completed between principals A and B if there is a
data packet X which the recipient B believes was sent
by the sender (signer), A. Thus, authentication between A and
B will be completed if B |≡ A |≡ X and B |≡ X, where the
symbol |≡ means believes. First, the basic rules of the BAN
logic are listed below:

- The Interpretation Rule

    B |≡ A |~ (X, Y)
    ----------------
      B |≡ A |~ X

The above rule means that if B believes that A once said a
message containing both X and Y, then it believes that A
once said each statement separately.

- The Message Meaning Rule

    B |≡ (QA → A),  B ◁ {X}SA
    -------------------------
           B |≡ A |~ X

This means that if B believes that QA is the public key of A,
and B sees a message X signed by SA, this implies that B
believes that A once said X.

- The Nonce Verification Rule

    B |≡ #(X),  B |≡ A |~ X
    -----------------------
          B |≡ A |≡ X

The above rule means that if B believes that X is a recent
message and A once said X, then it believes that A
believes in X.

- The Jurisdiction Rule

    B |≡ (A ⇒ X),  B |≡ A |≡ X
    --------------------------
             B |≡ X

This rule means that if B believes that A has jurisdiction over
X, and B believes that A believes in X, then B believes in X.

- The Freshness Rule

    B |≡ #(X, Y)
    ------------
      B |≡ #(X)

The above rule means that if B believes in the freshness of (X, Y),
then it believes in the freshness of each statement
separately. The analysis is undertaken for the message
exchanged between the sender, A, and the recipient, B. The
authentication is considered completed between A and B if
the following goals are achieved:

Goal 1: B |≡ A |≡ ri

Goal 2: B |≡ ri

where ri represents the block sent by A. In order to complete
the analysis, the following assumptions are made:

B |≡ (QA → A)             (1)

B |≡ (A ⇒ ri)             (2)

B |≡ #(γ)                 (3)

Equation (1) indicates that B believes that QA is the public key
of A. Equation (2) indicates that B believes that A
has jurisdiction over the block sent. Finally, equation (3)
indicates that B believes in the freshness of γ (since it is
changed for each message). After making the assumptions, the
messages transferred in the initial phase are transformed into
logical formulas, and the basic rules of the BAN logic are
applied to the logical formulas. Following is the
transformation of the proposed protocol into logical formulas:

A → B: {ri, γ}SA          (4)

The analysis of the protocol can now be performed. By
applying the message meaning rule to equation (4) and using
equation (1), the following can be deduced:

B |≡ A |~ (ri, γ)

But B believes in the freshness of γ (equation (3)). Thus,
applying the nonce verification rule, the following is obtained:

B |≡ A |≡ ri              (5)

Then, by applying the jurisdiction rule using equation (2), the
following is obtained:

B |≡ ri                   (6)

From equations (5) and (6), one can deduce that the proposed
protocol achieves the goals of authentication without bugs or
redundancies.
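The derivation of goals (5) and (6) from assumptions (1)-(3) and message (4) can be mechanized as a small forward-chaining sketch. The tuple encoding of BAN formulas is our own illustration, not standard BAN tooling:

```python
# Belief formulas are encoded as nested tuples; ("B", X) reads "B believes X".
PK = ("pk", "QA", "A")                  # QA is A's public key
SAID = ("said", "A", ("ri", "gamma"))   # A once said (ri, gamma)

state = {
    ("B", PK),                          # assumption (1)
    ("B", ("controls", "A", "ri")),     # assumption (2): jurisdiction over ri
    ("B", ("fresh", "gamma")),          # assumption (3): gamma is fresh
    ("B", ("sees", ("signed", "SA", ("ri", "gamma")))),  # message (4)
}

def apply_rules(state):
    new = set(state)
    # Message meaning: B |= pk(QA, A) and B sees {X}_SA  =>  B |= A said X
    if ("B", PK) in state and \
       ("B", ("sees", ("signed", "SA", ("ri", "gamma")))) in state:
        new.add(("B", SAID))
    # Nonce verification: B |= #(gamma), B |= A said (ri, gamma)
    #   =>  B |= A believes ri
    if ("B", ("fresh", "gamma")) in state and ("B", SAID) in state:
        new.add(("B", ("believes", "A", "ri")))
    # Jurisdiction: B |= A controls ri, B |= A believes ri  =>  B |= ri
    if ("B", ("controls", "A", "ri")) in state and \
       ("B", ("believes", "A", "ri")) in state:
        new.add(("B", "ri"))
    return new

while True:                             # chain rules to a fixpoint
    nxt = apply_rules(state)
    if nxt == state:
        break
    state = nxt

assert ("B", ("believes", "A", "ri")) in state   # Goal 1: B |= A |= ri
assert ("B", "ri") in state                      # Goal 2: B |= ri
```

Each loop iteration fires one inference step, mirroring the paper's chain: message meaning, then nonce verification, then jurisdiction.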

As the need for cloud forensics security arises, the need
to reduce the execution time and computation overhead
associated with the execution of cryptographic protocols
increases. In this paper, we propose an identity-based
signcryption protocol to reduce the computation,
communication, and implementation overheads of evidence
collecting in cloud forensics. Signcryption protocols have the

advantage of achieving the basic goals of encryption and
signature protocols in a more efficient way than Sign-Encrypt-
Sign and Encrypt-Sign-Encrypt techniques. At the same time,
the proposed protocol does not require the verifier/recipient to
process the signcrypted packets in sequence. The aim of the
proposed protocol is to ensure confidentiality, authenticity, and
chain of custody for the digital forensics process in the cloud
in an efficient way. Signcryption protocols allow the
confidential and authentic delivery of evidences to digital
forensic examiners in the cloud computing environment. As
such, signcryption is a very interesting mechanism for digital forensics
applications that deliver streamed big data content over
insecure channels. Utilizing signcryption techniques lowers the
communication and computation overheads. But, due to the
fact that some digital evidences have a huge volume of data and
need to be transmitted over the cloud securely, special
signcryption protocols that consider the digital forensics
requirements in the cloud are needed. The proposed protocol
allows the sender to divide the transmitted data into blocks to
overcome the big data problem in cloud evidence acquisition.
The proposed signcryption protocol is based on bilinear
pairings and utilizes identity-based cryptography. Protocols
that make use of bilinear pairings use cryptographic keys with a
key length smaller than that of protocols that do not implement
bilinear pairings. A smaller key length means less storage,
computation, and implementation overheads. Identity-based
cryptography gives the proposed protocol a lower
communication overhead than protocols that rely on a
PKI. As a result, the proposed protocol has a simpler structure
and is easier to implement than non-signcryption techniques.
In addition, the proposed protocol is analyzed using security
analysis and BAN logic to ensure that it achieves the goals of
encryption and digital signature. The analysis shows that it
achieves those goals without bugs or redundancies.

[1] Q. Zhang, L. Cheng, and R. Boutaba, “Cloud computing: state-of-the-art

and research challenges,” Journal of Internet Services and Applications,
vol. 1, 2010, pp. 7-18.

[2] S.P. Abirami and R. Shalini, “Linear Scheduling Strategy for Resource
allocation in Cloud Environment,” International Journal on Cloud
Computing and Architecture, vol. 2, no. 2, 2012, pp. 9-17.

[3] B. Grobauer, T. Walloschek and E. Stöcker, “Understanding Cloud
Computing Vulnerabilities,” IEEE Security & Privacy, vol. 9, no. 2, 2011,
pp. 50-57.

[4] P. Mell and T. Grance, “The NIST Definition of Cloud Computing,”
NIST, 2011, Special Publication 800-145.

[5] M.A. Caloyannides, N. Memon, and W. Venema, “Digital Forensics,”
IEEE Security & Privacy, vol. 7, no. 2, 2009, pp. 16-17.

[6] S. Hou, T. Uehara, S.M. Yiu, L.C.K. Hui, and K.P. Chow, “Privacy
Preserving Confidential Forensic Investigation for Shared or Remote
Servers,” Seventh International Conference on Intelligent Information
Hiding and Multimedia Signal Processing, 2011, pp. 378-383.

[7] S. Hou, T. Uehara, S.M. Yiu, L.C.K. Hui, and K.P. Chow, “Privacy
Preserving Multiple Keyword Search for Confidential Investigation of
Remote Forensics,” Third International Conference on Multimedia
Information Networking and Security, 2011, pp. 595-599.

[8] S. Hou, R. Sasaki, T. Uehara, and S. Yiu, “Verifying Data Authenticity
and Integrity in Server-Aided Confidential Forensic Investigation,”
Lecture Notes in Computer Science 7804, Springer, 2013, pp. 312-317.

[9] M. Nasreldin, M. El-Hennawy, H. Aslan, and A. El-Hennawy, “Digital
Forensics Evidence Acquisition and Chain of Custody in Cloud

Computing,” International Journal of Computer Science Issues, vol. 12,
issue 1, no. 1, 2015, pp. 153-160.

[10] K. Kent, S. Chevalier, T. Grance, and H. Dang, “Guide to integrating
forensic techniques into incident response,” NIST, 2006, Special
Publication 800-86.

[11] S. L. Garfinkel, “Digital forensics research: The next 10 years,” Digital
Investigation, Elsevier, vol. 7, 2010, pp. S64-S73.

[12] E. Casey, “Handbook of Digital Forensics and Investigation,” Academic
Press, 2009.

[13] B.D. Carrier, “Basic Digital Forensics Investigation Concepts,” 2006.

[14] N. Beebe, “Digital Forensic Research: The Good, the Bad and the
Unaddressed,” IFIP Advances in Information and Communication
Technology, 2009, vol. 306, Springer, pp. 17-36.

[15] B. Martini and K.-K. Choo, “Cloud storage forensics: ownCloud as a
case study,” Digital Investigation, vol. 10, no. 4, 2013, pp. 287-299.

[16] K. Ruan, “Cybercrime and Cloud Forensics: Applications for
Investigation Processes,” Information Science Reference, 2013.

[17] A. Saxena, G. Shrivastava, and K. Sharma, “Forensic Investigation in
Cloud Computing Environment,” The International Journal of Forensic
Computer Science, vol. 2, 2012, pp. 64-74.

[18] K. Ruan, J. Carthy, T. Kechadi, and M. Crosbie, “Cloud forensics: An
overview,” In proceedings of the 7th IFIP International Conference on
Digital Forensics, 2011, pp.16-25.

[19] R. Adams, “The Advanced Data Acquisition Model (ADAM): A process
model for digital forensic practice,” Murdoch University, 2013.

[20] D. Birk and C. Wegener, “Technical issues of forensic investigations in
cloud computing environments,” IEEE Sixth International Workshop on
Systematic Approaches to Digital Forensic Engineering, 2011, pp. 1-10.

[21] J. Vacca, “Computer forensics: computer crime scene investigation,”
Delmar Thomson Learning, 2005.

[22] M. Sudha and M. Monica, “Enhanced security framework to ensure data
security in cloud computing using cryptography,” Advances in
Computer Science and its Applications, vol. 1, no. 1, 2012, pp. 32-37.

[23] K. W. Nafi, T. S. Kar, S. A. Hoque and M. M. A. Hashem, “A newer
user authentication, file encryption and distributed server based cloud
computing security architecture,” International Journal of Advanced
Computer Science and Applications, vol. 3, no. 10, 2012, pp. 181-186.

[24] Y. Zheng and H. Imai, “How to construct efficient signcryption schemes
on elliptic curves,” Information Processing Letters, vol. 68, Elsevier
Inc., 1998, pp. 227-233.

[25] C.P. Schnorr, “Efficient identification and signatures for smart cards,”
Advances in Cryptology - Crypto '89, Springer-Verlag, 1990, Lecture
Notes in Computer Science, nr 435, pp. 239-252.

[26] M. Rasslan and H. Aslan, “On the Security of Two Improved
Authenticated Encryption Schemes,” International Journal of Security
and Networks, vol. 8, no. 4, 2013, pp. 194-199.

[27] G. El-Kabbany, H. Aslan, and M. Rasslan, “An Efficient Pipelined
Technique for Signcryption Algorithms,” International Journal of
Computer Science Issues, vol. 11, issue 1, no. 1, 2014, pp. 67-78.

[28] T.-Y. Wu, T.-T. Tsai and Y.-M. Tseng “A Revocable ID-based
Signcryption Scheme”, Journal of Information Hiding and Multimedia
Signal Processing, ISSN 2073-4212, vol. 3, no. 3, 2012, pp. 240-251.

[29] D.R.L. Brown, “Deniable authentication with RSA and multicasting,”
Cryptology ePrint Archive, Feb.

[30] W. Diffie and M. Hellman, “New directions in cryptography,” IEEE
Transactions on Information Theory, vol. IT-22, no. 6, 1976, pp. 644-654.

[31] L. Kohnfelder, “On the signature reblocking problem in public key
cryptosystems,” Communications of the ACM, vol. 31, no. 19, 1995.

[32] K. Nyberg and R. Rueppel, “Message recovery for signature schemes
based on the discrete logarithm,” Designs, Codes and Cryptography, vol.
7, no. 1-2, 1996, pp. 61-81.

[33] R. Rivest, A. Shamir, and L. Adleman, “A method for obtaining digital
signatures and public-key cryptosystems,” Communications of the ACM,
vol. 21, no. 2, 1978, pp. 120-126.

[34] Y. Zheng, “Digital signcryption or how to achieve Cost ( Signature &
Encryption ) << Cost ( Signature ) + Cost ( Encryption ),” Proc. of
CRYPTO’97, LNCS 1294, Springer-Verlag, 1997, pp. 165-179.
