Full text
Preview |
PDF (Portable Document Format)
- A PDF file viewer is required, such as GSview, Xpdf or Adobe Acrobat Reader
Download (1MB) | Preview |
| Title: | Testing of Android testing tools: development of a benchmark for the evaluation |
|---|---|
| Author(s): | |
| Supervisor(s): | |
| Document type: | Thesis (Master) |
| Master's degree: | Ingeniería del Software |
| Date: | July 2015 |
| Subjects: | |
| SDG: | |
| School: | E.T.S. de Ingenieros Informáticos (UPM) |
| Department: | Lenguajes y Sistemas Informáticos e Ingeniería del Software |
| Creative Commons license: | Attribution - NoDerivatives |
With the ever-growing adoption of smartphones and tablets, Android is becoming more and more popular every day. With more than one billion active users to date [i], Android is the leading technology in the smartphone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid-2013, the number of published applications on Google Play had exceeded one million, and the cumulative number of downloads was more than 50 billion [ii]. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications [iii].
Given this number of Android applications, it is quite evident that people rely on them on a daily basis, for everything from simple tasks like keeping track of the weather to rather complex tasks like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the sheer number of applications, it becomes very hard to test Android applications manually, especially when each one has to be verified against various versions of the OS and various device configurations, such as different screen sizes and different hardware availability. Hence, in recent years the Computer Science community has put a lot of work into developing different testing methods for Android applications.
Android attracts researchers because of its open-source nature: the whole research process is more streamlined when the code for both the application and the platform is readily available for analysis. Hence, there has been a great deal of research on testing and static analysis of Android applications, and much of it has focused on test input generation for Android applications.
As a result, several testing tools are now available that focus on the automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use to generate test cases. However, there is still very little work on comparing these testing tools and the strategies they use. Recently, some research has been carried out [iv] in this regard, comparing the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms, and ease of use. This was done by running the tools on a total of 60 real-world Android applications. The results of this research showed that, although effective, the strategies used by these tools also face limitations and hence have room for improvement.
The purpose of this thesis is to extend this research in a more specific, attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform. An attribute can be anything, ranging from a basic system call for receiving an SMS to a more complex task like sending the user from the current application to another one. The idea is to develop a benchmark for Android testing tools that is based on their performance with respect to these attributes.
This will allow the tools to be compared attribute by attribute. For example, if an application plays some audio file, will the testing tool be able to generate a test input that triggers the playback of this audio file? By using multiple applications that exercise different attributes, it can be visualized which testing tool is more useful for which kinds of attributes. In this thesis, 9 attributes covering the basic kinds of tasks were targeted for the assessment of three testing tools. Later, this can be done for many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be applied on a much larger scale.
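The attribute-by-attribute comparison described above amounts to filling in a tool × attribute matrix: for each tool and each attribute, did the tool's generated inputs exercise that attribute? As a minimal sketch of how such results could be aggregated (the tool and attribute names below are placeholders, not the ones used in the thesis):

```python
# Illustrative sketch: aggregating benchmark results per tool.
# Tool names ("ToolA", "ToolB") and attribute names are hypothetical.

# Each entry records whether a tool's generated inputs triggered an attribute.
results = {
    ("ToolA", "play_audio"): True,
    ("ToolA", "receive_sms"): False,
    ("ToolB", "play_audio"): False,
    ("ToolB", "receive_sms"): True,
}

def coverage_by_tool(results):
    """Return, for each tool, the set of attributes it managed to exercise."""
    covered = {}
    for (tool, attribute), triggered in results.items():
        if triggered:
            covered.setdefault(tool, set()).add(attribute)
    return covered

print(coverage_by_tool(results))
```

With more tools and attributes, the same matrix directly shows which strategy is more useful for which kinds of attributes.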
One of the flagship features of this work, which also differentiates it from previous work, is that the applications used were all purpose-built for this research. The reason for this is to analyze each specific attribute in isolation, so that the tool cannot get bottlenecked by something trivial that is not the attribute under test. This means 9 applications, each focused on one specific attribute.
The main contributions of this thesis are:

• A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications.

• A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study.

• An analysis of the results of the study carried out, and a comparison of the performance of the selected tools.
| Record ID: | 36898 |
|---|---|
| DC identifier: | https://oa.upm.es/36898/ |
| OAI identifier: | oai:oa.upm.es:36898 |
| Deposited by: | Biblioteca Facultad de Informatica |
| Deposited on: | 15 Jul 2015 08:52 |
| Last modified: | 04 Feb 2016 10:07 |