
Departamento de Informática

SCIENTIFIC INITIATION ACTIVITY REPORT

Student: Eduardo Tiomno Tolmasquim
Advisor: Clarisse Sieckenius de Souza

Co-advisor: Ingrid Teixeira Monteiro (PhD candidate at the DI)

Scientific Initiation Project: “SideTalk and Self-Expression” (2013-2014)

My scientific initiation research project for 2013-2014 had the following objectives:

To continue the Scientific Initiation process, allowing me to go deeper into the scientific research to which the software I have been working with [since my first scholarship, in 2012] is connected; and

To develop a tool to: (a) collect empirical data for a doctoral research project of the laboratory where I did my internship; and (b) serve as a basis for creating active support documentation for creators of mediation dialogs, thereby contributing to the development of their computational thinking.

The approved project proposal also mentioned the risks of the research, which is qualitative and strongly dependent on the availability of the participants who voluntarily contribute to empirical data collection. Specifically, the proposed project involved working with partner schools of the Scalable Game Design Brasil project [De Souza et al. 2014], whose goal is to propose support technologies for teaching and learning computational thinking in middle and high schools. In the proposal's own words: “Any project that, like this one, involves the participation of users in empirical studies or evaluation activities runs the risk of schedule delays and project course corrections.”

My description of the results follows, organized by topic. As will be seen, course corrections were indeed made. Nevertheless, the main intended results were achieved and, beyond them, we also reached other important accomplishments not anticipated in the 2013 proposal.

Scientific Training Activities

One of the objectives of this project was to allow the candidate to go deeper into the scientific research to which he was connected. Regarding this objective, I detail below the activities I carried out during these 12 months of the scientific initiation internship.

A. Participation in the SGD-Br Project

My participation in the SGD-Br project took several forms. The project's tool is PoliFacets (www.serg.inf.puc-rio.br/polifacets), which, among other things, supports the analysis of games built with AgentSheets. Figure 1 shows the system's entry page. I participated actively in the development of the tool, both in programming itself and in the weekly development management meetings. I also attended the weekly meetings on the overall progress of the research project. In particular, I had the opportunity to carry out a field research activity at one of the partner schools of Scalable Game Design Brasil. I took part in planning the data collection activity, in running it, and in analyzing the data.


Another important activity was the design and delivery of a workshop for teachers from interested schools.

Figure 1: PoliFacets (entry page)

B. Participation in IHC'2013

I took part in the SERG'2020 Workshop, which gathered representatives of research groups from several universities across the country. Its members are alumni of the Semiotic Engineering Research Group (SERG) of the Departamento de Informática and associated collaborators, most of them professors in graduate programs where research on Human-Computer Interaction (HCI) is carried out. The Workshop had two phases: the first in Rio de Janeiro and the second in Manaus. In Manaus I also attended IHC'2013, the XII Brazilian Symposium on Human Factors in Computing Systems. I was a co-author of the full technical paper entitled Going Back and Forth in Metacommunication Threads [Monteiro et al. 2013], presented at this symposium.

C. Enrollment in and Completion of a Graduate Course at the Departamento de Informática

I took the graduate course INF2811 – Topics in Programming Languages II, taught by my advisor, Clarisse de Souza. The course set out to study which concepts each learning-oriented programming language encourages its users to assimilate. I did a term project on the NetLogo language. The content of the course, even if indirectly, helped the SGD-Br project, since it prompted reflections on the teaching of programming.


D. Conducting and Co-authoring Research Associated with Ingrid Monteiro's PhD

In order to collect data for Ingrid Monteiro's research, I took part in a research study (mentioned earlier) with students from the Escola Americana do Rio de Janeiro (EARJ). The goal of the study was to analyze children's self-expression while creating a script using the SideTalk tool. A technical paper about this study was written and submitted to an important international HCI conference, GROUP'2014 (Qualis A2 in Computer Science). The paper received good reviews but was not accepted because the conference is highly competitive. It is being rewritten for a new submission, also to a top-level conference, SIGCSE'2015 (Qualis A2 in Computer Science).

Together with Ingrid Monteiro, I also gave a lecture in the undergraduate Introduction to HCI course (INF1403). The lecture aimed to show what research is, in general and in computing. The evolution of the WNH tool into SideTalk was used as a real example to illustrate the message.

Development of a Tool

The other objective of this project, as proposed in 2013, was the development of a tool to: (a) collect empirical data for a doctoral research project of the laboratory where I did my internship; and (b) serve as a basis for creating active support documentation for creators of mediation dialogs, thereby contributing to the development of their computational thinking.

A. SideTalk for empirical data collection and as a teaching-learning environment for computational thinking

The tool as it stands today is called SideTalk (the successor of WNH – Web Navigation Helper), a Firefox extension that can be freely downloaded from: http://www.serg.inf.puc-rio.br/sidetalk

We submitted a proposal to present the tool in the Posters & Demos session of IHC'2014, the XIII Brazilian Symposium on Human Factors in Computing Systems (http://www.inf.unioeste.br/ihc14/pt/posteres_demonstracoes.php). The goal is to show how the tool works and its importance for Semiotic Engineering. We will show, with practical examples, how users engage in a Semiotic Engineering task at the moment they express themselves through the creation of a computational object. Figures 2a, 2b and 2c show, respectively, SideTalk's interface for selecting conversations, the screen of an ongoing dialog about an educational Web page and, finally, the dialog editing interface (with the steps of the conversation illustrated in Figure 2b).


Figure 2a: Selection screen for the dialogs available in SideTalk

Figure 2b: Screen of an ongoing dialog about an educational Web page


Figure 2c: Dialog editing interface
(with the steps of the conversation illustrated in Figure 2b; note the selected step)

As for using SideTalk for empirical data collection, the tool was used with great success. An experiment was carried out in which students from a school partnered with the SGD-Br project used SideTalk. The students had previously created computer games on their own, using AgentSheets, and had analyzed them using PoliFacets. They were then asked to create a script, using SideTalk, to present their game to the teacher. Many interesting data were collected, revealing both how the students understand what software is and how they express themselves through the creation of a script.

Regarding purpose (b), however (serving as a basis for teaching-learning activities on computational thinking), there was a change of direction, mainly because the SGD-Br partner schools showed other interests and priorities concerning the environments in which to work on the development of computational thinking (for example, using other programming languages and/or other game and simulation [creation] environments, different from the one currently in use). Nevertheless, the teacher of one of the classes starting activities in August 2014 has already expressed clear enthusiasm for and interest in SideTalk, which is why we will complete the prototype with the documentation aspects that were deprioritized in 2013-2 and 2014-1.

B. SideTalk as a teaching-learning environment for human-computer interaction

Throughout this research, SideTalk was tried out as a didactic support tool for the teaching and learning of undergraduate students of the Departamento de Informática, in the Introduction to Human-Computer Interaction course (INF1403 – class 3WB, taught by professor Clarisse Sieckenius de Souza). The professor, my advisor in this project, explored with her students the process of communication through software, proposing the use of SideTalk as a way of paraphrasing interactions between users and Web sites. From the results of this experience, shared with the other professors of the course and with this project's co-advisor, the doctoral student Ingrid Monteiro, came the idea of extending SideTalk's functionality so that it can support the teaching of Interaction Modeling according to the MoLIC approach [Barbosa & Da Silva 2010]. The idea was very well received by the professors, who suggested prototyping this extended SideTalk so that it can be refined and adjusted to the point of becoming a teaching-learning tool for all HCI professors interested in the subject.


The current prototype (in English, and soon also in Portuguese) was built in Javascript – it is only a mockup of what the actual tool will be, and it will serve for successive usage evaluations before we start its final implementation. It can be downloaded from: http://www.serg.inf.puc-rio.br/sidetalk/pesquisa.shtml.

The prototype is a tool written in Javascript and HTML. Its user walks through the scenes of a MoLIC diagram, making comments on each scene. Afterwards, the HCI designer who made the diagram can read the comments and know exactly which path the user followed.
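To make the prototype's behavior more concrete, the Javascript sketch below illustrates one possible way of representing such a walkthrough: the user visits the scenes of a MoLIC diagram in order, attaches a comment to each visited scene, and the resulting trace can later be read by the designer. This is only an illustrative sketch with assumed names (scenes, trace, visits); it is not the prototype's actual code, which can be obtained from the URL above.

// Hypothetical sketch (not the prototype's actual code): a user's walkthrough of a
// MoLIC diagram recorded as an ordered list of visited scenes, one comment per scene.
const scenes = {
  login:   "Identify yourself",
  search:  "Search course material",
  results: "Examine results"
};

function trace(visits) {
  // visits: the path the user actually took, in order, with a comment for each scene
  return visits.map(function (v, i) {
    return (i + 1) + ". " + scenes[v.scene] + " -- " + v.comment;
  }).join("\n");
}

// The HCI designer who drew the diagram later reads the exact path and the comments:
console.log(trace([
  { scene: "login",  comment: "Clear what I am supposed to do here." },
  { scene: "search", comment: "Not sure which keywords are expected." }
]));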

Figure 3: SideTalk prototype for teaching MoLIC

(in English because it will be demonstrated at an international workshop)

Conclusion

I regard my two years of scientific initiation as very fruitful. I had the opportunity to observe closely how a research group works. I was able to take part in routine meetings about the progress of a project, in the design and execution of experiments for papers, and also in software programming. I came to understand the relation between the software I was helping to build and the knowledge it could generate.


References

DE SOUZA, C. S.; SALGADO, L. C.; LEITÃO, C. F.; SERRA, M. M. Cultural appropriation of computational thinking acquisition research: seeding fields of diversity. In Proceedings of the 2014 Conference on Innovation & Technology in Computer Science Education (ITiCSE '14). ACM, New York, NY, USA, 117-122, 2014.

MONTEIRO, I. T.; TOLMASQUIM, E. T.; DE SOUZA, C. S. Going back and forth in metacommunication threads. In Proceedings of the 12th Brazilian Symposium on Human Factors in Computing Systems (IHC '13), Manaus. Porto Alegre: SBC, 2013, pp. 102-11. [Also published by the ACM International Conference Series]

BARBOSA, S. D. J.; SILVA, B. S. Interação Humano-Computador. Editora Campus-Elsevier, 2010.

Seen and approved:

Clarisse S. de Souza
Clarisse Sieckenius de Souza
Full Professor


Annexes

I. MESSAGE AS MIRROR: SELF-EXPRESSION VIA COMPUTER MEDIATED COMMUNICATION
Paper submitted to GROUP'2014 (Qualis A2) – a new version is being prepared for submission to SIGCSE'2015 (Qualis A2). The authors are anonymized as required by the conference's submission rules, but they are: Ingrid Teixeira Monteiro, Eduardo Tiomno Tolmasquim and Clarisse Sieckenius de Souza.

II. SIDETALK: INTERPERSONAL COMMUNICATION ON/ABOUT THE WEB
Paper submitted to IHC'2014 (Qualis B4), for the Posters and Demos session. The authors are anonymized as required by the conference's submission rules, but they are: Ingrid Teixeira Monteiro, Eduardo Tiomno Tolmasquim and Clarisse Sieckenius de Souza.

Message as mirror: expressing self in scripted computer-mediated communication

1st author, 2nd author, 3rd author
(author names, affiliations and emails anonymized for submission)

ABSTRACT

This paper reports a study on how users manifest their

individualities while communicating with others through scripted

dialogs. Although self-expression has been studied in computer-

mediated communication in other contexts (e. g. social networks,

blogs and avatars), how subjectivity arises in end user

programming contexts is yet a relatively unexplored topic. We

present a case study on self-expression with SideTalk, a scripted

communication tool for the Web. Results show the richness and

variety of self-expression elements in technology-mediated

discourse produced by a group of middle school students.

Categories and Subject Descriptors

H.4.3 Information Systems Applications: Communications

Applications; H.5.2. Information interfaces and presentation

(e.g., HCI): User Interfaces.

General Terms

Design, Human Factors, Theory.

Keywords

Self-expression, computer as media, SideTalk.

1. INTRODUCTION
Although technology has deeply changed communication

practices, the essence of this distinctive human capacity remains

the same. While communicating, people express their thoughts,

intentions, values and needs. These are elements of their self-

constitution, extensively investigated by psychologists and

discourse analysts to-date. In combination with technology, self-

expression has been studied by researchers interested in

computer-mediated communication (e. g. social networks, blogs

and avatars), but the topic has been relatively underexplored in

the context of group work. The attention of this community of

researchers has been more geared towards other topics such as

those discussed in two parallel sessions on CMC held during the

2009 edition of ACM Group (e.g. culture, online communities

and mobile communication).

In this paper we present a case study on self-expression with

SideTalk1 [17] [18], a scripted communication tool for the Web.

Unlike most CMC contexts discussed to-date, our work refers to

asynchronous communication that must be programmed by users

and evolves as interlocutors interact with programmed dialogs.

As will be shown in detail in the following sections, this activity

not only opens a number of new CMC possibilities in group work

contexts, but also – and more importantly for this paper –

demonstrates the interest of investigating the manifestation of

subjectivity through programming (more specifically, end user

programming). Computer languages can be used to communicate

self-expression intentionally and non-intentionally. Our findings

in this area are based on empirical observations of how a group

of middle school students presented their work to their teacher

using scripted SideTalk dialogs. They underline the richness and

variety of self-expression elements in this specific context of

technology-mediated discourse and lead us to propose a few lines

of investigation to stimulate more studies about this topic.

The next two sections provide background information about

SideTalk and previous research on self-expression. In the fourth

section we present our case study, highlighting its methodology

and findings. The final section presents our conclusions and

some suggestions for future work for those interested in self-

expression through scripted computer-mediated communication.

2. SIDETALK
SideTalk [17] [18] is a CMC tool implemented as a Firefox

extension. It supports scripted interpersonal communication

referring to one or more Web pages that the browser can open.

The effect of this style of communication is, as the name of the

tool suggests, that of having a parallel conversation about

information and interaction shown on Web pages. All

conversation refers to previously recorded navigation on selected

pages. One example to illustrate SideTalk’s profile and the kinds

of purpose it can be used for, is to think of Web pages from a

touristic Web site, whose language might not be understood by a

group of users (let us call them SideTalk receivers). Receivers

are interested in the Web site’s content, but need mediation to

understand it. One or more people from another group of users

(let us call them senders), who understand the language and are

willing to help the receivers, can record interesting navigation

scripts in the Web site and subsequently design interactions

1 SideTalk was previously named Web Navigation Helper (WNH), and it is referred to as such in most of the references.

for/about the scripted navigation in a language that receivers can

understand.

In Figure 1 we show a snapshot of a SideTalk dialog created by a

sender for “Café do Alto”. The entire Web site is in Portuguese,

which makes it difficult for receivers who cannot understand the

language to get the information and interact with the site. Thus, a

sender has scripted a set of navigations and designed parallel

conversation for English-speaking receivers. Notice that

SideTalk appears on the side bar of a Firefox browser’s window,

the referred Web site is displayed on the right side and the

mediation dialog on the left side includes interactions (e.g. links

or buttons at the bottom of the side bar).

Figure 1. A SideTalk conversation sample

To build a SideTalk conversation, senders begin by creating a

script, that is, a sequence of interaction and navigation steps that

are achieved using the regular browser interface. The script

records all performed actions such as, going to a different URL,

clicking on specific buttons or links, marking checkboxes, and

the like. Scripts are created by CoScripter [14], a macro recorder

developed by IBM and embedded into SideTalk.
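As an illustration only (CoScripter's real recording format is not reproduced here, and the step fields below are assumptions made for this example), a recorded script can be thought of as an ordered list of steps, each naming a browser action and its target:

// Hypothetical sketch of a recorded navigation script as a list of step objects;
// the field names ("action", "target", "url") and the URL are illustrative assumptions.
const script = [
  { action: "goto",  url: "http://example.com/cafe-do-alto" },
  { action: "click", target: "the 'Menu' link" },
  { action: "check", target: "the 'English version' checkbox" }
];

// Replaying is conceptually a walk over the steps, one browser action per step:
script.forEach(function (step, i) {
  console.log("Step " + (i + 1) + ": " + step.action + " " + (step.url || step.target));
});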

Once scripts are ready, senders can begin to design and program

their mediation dialogs. The dialogs’ final form (see Figure 1) is

that typically supported by HTML. So, senders can embed

images, sound and interactive controls in it, create new Web

pages, for example, to explain, illustrate, or complement

information, while talking about the original Web site, whose

contents appear on the right side of the screen. The result is a

new interaction for receivers with respect to previously existing

interaction on the Web. Figure 2 sketches the effect of SideTalk.

The yellow balloon message icons represent the dialogs created

by senders, which will be subsequently used by receivers. The

scripted navigation between existing Web pages is represented

by sequential numbers and arrow-headed lines. Notice that one

Web page can have several dialogs associated with it, which

allows for great flexibility in the communication process.

Figure 2. Conceptual schema of SideTalk

SideTalk conversations can serve a number of purposes. In

addition to helping users with certain kinds of accessibility

problems [1] (like interacting with a Web site in a foreign

language, for example), they can be used to make a commentary

or explanation about Web content (helping journalists and

teachers, for example), as well as creating standardized online

processes for groups of users engaged in specific tasks (which is

of particular interest for CSCW and groupware applications in

general).

SideTalk’s senders (the mediation conversations’ authors) are,

as implied above, engaged in designing and developing new

interactive software, computational artifacts that will be used by

receivers in anticipated (and potentially non-anticipated)

contexts. In other words, SideTalk senders are truly engaged in

end user development activities [13] [15] whose purpose is to

achieve computer-mediated communication among people.

SideTalk’s programming interface, still going through iterative

design cycles, is a mixture of programming by demonstration (the

main style adopted by CoScripter), parametric programming (e.

g. by selecting pre-determined typed values from a list or

informing new ones in appropriate input boxes) and textual

scripting (e. g. writing HTML code for dialog content or editing

steps encoded in CoScripter’s scripting language).
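Purely as an illustrative sketch (the dialog editor's actual data format is not shown here, so all field names below are assumptions), a mediation dialog can be pictured as a Javascript object that ties a recorded script step to the HTML content written by the sender and to the controls offered to the receiver:

// Hypothetical representation of one mediation dialog; field names are assumptions.
// Senders author the HTML content themselves, as described above.
const welcomeDialog = {
  step: 1,                                   // the recorded step this dialog accompanies
  title: "Welcome",
  contentHtml: "<p>Hi! I will walk you through this site, which is in Portuguese.</p>",
  controls: ["Continue", "Skip this step"]   // e.g. buttons at the bottom of the sidebar
};

console.log(welcomeDialog.title + " accompanies step " + welcomeDialog.step);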

An important feature of SideTalk dialogs is that they may (and actually tend to) be very personal, a feature they share with a large portion of end user development artifacts [13]. Because

they are not professional large-scale developments, they typically

attend to the needs of personally significant contexts, where

senders have an active motive to communicate with the receivers.

In such situations, self-expression and self-representation

become particularly salient and relevant, in parallel with what is

the case with other kinds of human communication, not mediated

by technology. Problems in communicating the desired

propositional attitude, for instance, can compromise an entire

process of interpersonal interaction. With SideTalk, this touches

not only on the use of emoticons and the awareness of cultural

differences while creating the various dialogs’ content, but also

on the ability to express one’s values regarding, among other

things, sensitivity to one’s interlocutor’s (the receiver) particular

needs and interests while interacting with the scripted mediation.

The interesting research to be done is to find out whether this is

perceived to be so (by senders and receivers) and, in the

affirmative case, supposing that senders don’t mean to be

inconsiderate, to find out whether the end user development tools

with which dialogs are created must be enhanced (and how) to

prevent undesired interpersonal consequences.

3. RELATED WORK ON SELF-EXPRESSION
Self-expression is a phenomenon linked to human

communication that is an object of study in such fields as Social

Communication, Linguistics, Psychology and Education. The

computer as media perspective in Information Systems and

Human-Computer Interaction has also motivated some

researchers [2][7], but the volume of this kind of research is

relatively small, especially when it comes to studying one’s self-

expression as the result of programming.

The pioneering work by Turkle [22] is possibly the ancestor of

much research on self-expression through computer programs.

When studying professional programmers and young learners

(children) alike she found evidence that “the machine can act as

a projection of part of the self, a mirror of the mind.” [22] (p.

20). More specifically, she proposed that “a computer program

is a reflection of its programmer’s mind. If you are the one who

wrote it, then working with it can mean getting to know yourself

differently”. [22] (p. 24). Although her perspective was

sometimes focused on self-knowledge, as implied in the latter

quotation, the implications for interpersonal exchange, especially

in the context discussed in this paper, are clear.

In Linguistics, the work of Johnstone [12] reminds us that people

appropriate natural language in individual ways and that this

very appropriation constitutes and projects the subject’s

personality. “People know each other because each has a unique

way of sounding, an individual voice.” (p. 7). This perspective

can be (and has been) extended to other languages used in

contemporary social environments. For example, Walsh [24]

speaks of new literacies that have emerged in game playing and

how the knowledge required to be a proficient gamer is in

substantial ways parallel to that which traditionally constitutes

literacy, in particular functional literacy (the ability to use

language in order to act in the world, achieve one’s personal and

collective goals, influence others, affect groups, exercise

leadership, etc.). In his words: “Configuring the game is often as

important as playing it, because players configure the game as

they like it (personalization). Afterwards, the machine executes

those choices and this affects gameplay.” [24] (p.28).

Researchers from the CMC field have investigated self-

expression in a large spectrum of contexts. There is work

addressing the phenomenon on social networks [6], chats [23]

and blogs [3]. Other scholars paid attention to less common CMC

features like the use of avatars [21], mobile communication [5],

3D environments [4] and even wearable computing [16].

The study of self-expression has been more intense in the context

of social networks. For example, Hwang and coauthors [11]

discuss how Flickr users promote themselves in this community.

They analyze the strategies used to make their photos – and thus

to become themselves, as photographers – more popular in the

eye of others. DiMicco and Millen [9] investigate how Facebook

users present themselves while managing double identities: one

personal and the other professional. The authors are interested in

finding what the users do in order to emphasize or deemphasize

one identity or the other. The outcome of their study is a

framework describing three different personas to characterize

people who manage these two identities.

As mentioned before, our study examines a different context of

CMC, which is nevertheless –in our view– an increasingly

important one. Similarly to the ideas put forth by Walsh [24], we

see that current technologies have been improving customization

and extension features and that ‘personalizing’ certain kinds of

software to make them maximally efficient for use is not only a

linguistic activity per se, but also one whose goal is many times

to affect others and to create certain kinds of desired

representations of self. The more or less sophisticated

programming skills required in software customization and

extension activities are not confined to strictly personal context.

For a discussion of how configuration tasks can affect

interpersonal relations online, see for example [8].

In the next section we present the details of a case study with six

middle school students, who used SideTalk to present their game

projects to their teacher, in asynchronous computer-mediated

scripted communication.

4. CASE STUDY
This case study was carried out with six middle school students

at the end of a 4-month Computational Thinking Acquisition

(CTA) program at an international school in Rio de Janeiro.

During the program, a group of 30 students learned to design and

program games using <CTA_SYSTEM>. The small group of

participants was self-selected (volunteers). Our aim was to see if

and how they expressed individual personality traits in designing

asynchronous communication with their teacher using SideTalk.

This program is part of <CTA_PROJECT>2 [20] and is the

Brazilian branch of <ORIGINAL_CTA> [19], a project created

and carried out by a multidisciplinary team at the University of

Colorado in Boulder. The goal of the project in both countries is

promoting the teaching and learning of computational thinking in

middle and high schools, through the design of games and

simulations. <CTA_PROJECT> started in 2010, with a small

number of partner schools, two national (one public and one

private) and one international (American) school. In its initial

phase, one of the main goals of the project is to design support

technology for the project to scale up in coming years. Also,

<CTA_PROJECT> places an emphasis on stimulating

communication and gaining fluency in working with multiple

representations for intended meanings. As a result,

<CTA_PROJECT> has developed a live documentation system,

<DOC_SYSTEM>. Teachers and students can use it to explore

various programming representations that contribute to the final

gameplay experience communicated by its creator. In our study,

<DOC_SYSTEM> was used as a basis for the proposed activity

(detailed below), since it is a Web-based technology and well

fitted for a study with SideTalk.

4.1 Participants and Methodology
The six participants were 8th-grade students from the

<CTA_PROJECT> international partner school. Being an

American school, its official language is English and they follow

USA curricula and other educational practices. Three boys and

three girls, 12 years old on average, volunteered to participate in

the study. Their national origin is shown in Table 1.

Table 1. Participants' profile

         P1     P2     P3      P4      P5             P6
Gender   Male   Male   Female  Female  Male           Female
Origin   USA    Spain  Spain   USA     Arab Emirates  Israel

With the consent of the class’s teacher and students’ parents, we

invited these students to take part in one activity with SideTalk.

All of them were familiar with <DOC_SYSTEM>, which they

had used in class mainly to share their games with the teacher

and classmates. The six participants had uploaded between 3 and

5 projects to the system. So, we proposed to them an activity

scenario, which is shown in Frame 1. The actual teacher’s name

was used in it, but has been replaced here by “Mr. T”. Also,

none of the students had ever seen SideTalk.

2 Words inside <> indicate names omitted for blind review.

Mr. T is travelling and will not be able to be with you in the last class.

However the <CTA_PROJECT> team suggested to Mr. T that he uses

SideTalk for communication with his students. SideTalk is a tool that

supports conversation about web pages through some special dialogs.

Now, your task is to report to Mr. T what you have done with

<CTA_SYSTEM>. Please, choose one of your projects and create a

conversation using SideTalk to show some interesting pages related to

your game in <DOC_SYSTEM>.

Please include in your conversation the following facets: “In practice”

(game applet), “Description” and a third facet of your own choice.

Important: You are allowed to write in Portuguese, English or Spanish.

Frame 1. Proposed scenario

The study started with students watching a short oral

introduction about SideTalk. In it, the fictitious main character

creates a similar computer-mediated conversation, showing one

of his projects to his teacher. Next, the participants were taught

(by a research team member) how to create a navigation script

and its corresponding dialogs in SideTalk. Having gained some

familiarity with the technology, participants were given time to

navigate through <DOC_SYSTEM> and decide which game they

wanted to present to their teacher, and which <DOC_SYSTEM>

features they thought would be important. We suggested that

they register their conversation plan in a notepad file, so they

could use it while creating their script and dialogs. The most

time-consuming phase was that of programming asynchronous

dialogs in SideTalk. In the end, we asked participants to answer

open-ended questions in an online Web form. Questions had to

do with their personal profile (including nationality and mother

tongue) and experience using SideTalk during the whole activity.

The case study was performed in two 60-minute sessions, with

three participants each.

Given the novelty of the task, the tool and the technology in our

research, we adopted a qualitative approach. This seemed

especially appropriate in a situation where we might not know all

the factors influencing the participants’ performance. Likewise,

it helped us gain deeper understanding of the phenomenon itself

before we begin to raise hypotheses and collect measurements.

Empirical evidence came from the following sources: audio and

video recordings of participants’ activities; screen captures of all

interactions with computer technologies (use of SideTalk, Web

browsing, form filling, etc.); notepad files with conversation

plans; the resulting SideTalk conversations; filled-in Web forms;

and researchers’ field notes.

Figure 3 shows one dialog extracted from P6’s SideTalk

conversation. In Firefox’s left-hand sidebar, the dialog is opened.

It talks about the game’s instruction that is highlighted in green

in <DOC_SYSTEM>, on the right-hand side, where there is a

menu presenting all available facets. The Web page opened in

the browser corresponds to the “Description” facet.

Figure 3. Talking about Sokoban

4.2 Data Analysis
Results are organized by type to facilitate the discussion. One of

them (Dialog Analysis) is further categorized into 5 groups of

meaningful findings.

4.2.1 Numeric indicators
In a qualitative investigation it does not make sense to speak about

statistics. However, some numbers can help us understand

aspects of the participants’ behavior. In Table 2 “Steps” indicate

how many script commands were recorded to compose the

conversation. The most common commands are: “go to”

(indication of a URL), “click” (clicking on links or buttons) and

“clip” (highlighting of an element, without activating it).

“Dialogs” is the count of dialogs created for the conversation. It

should be noted that all SideTalk conversations must have a

welcome dialog and a closing dialog.

The table shows that participants created from 5 to 10 dialogs to

talk with Mr. T. There is no clear correspondence between the

number of steps and dialogs. One can be greater than the other

and vice-versa, or else they can be the same.

“Web pages” corresponds to each new page Mr. T will see

during the conversation. It ranged from 3 to 6, and four

participants out of six presented 3 Web pages in the conversation.

Looking at the number of steps, we can see that there is no 1:1

relation. In general there are many more steps than Web pages,

meaning that participants spent time on the same Web page.

Also, participants developed more than one dialog to talk about

some pages.

Table 2. Numerical indicators

                  P1     P2     P3     P4     P5     P6
Steps              8      9     11      7      5     10
Dialogs            9     10      8      7      5      8
Web pages          6      3      4      3      3      3
Words (average)   10.22  31.40  45.25  11.86  15.80  40.63
Topics             9      6      6      5      4      5

“Words” is the average of words per dialog. Taking only the

extremes, P1's average was about 10 words per dialog and P3's about 45, showing the wide range of text styles.

The “Topics” indicator corresponds to what participants talked

about inside the dialogs. All of them introduced and ended the

conversation with a specific dialog. All of them had to talk about

three of the game's facets, as suggested in the scenario, so each facet

constitutes one topic. In Table 2 we can see that participants

talked about from 4 to 9 different topics. The number of topics is

always greater than the number of Web pages.

Table 2 also shows that P1 has a greater number of topics than

the others. This happened because, during the activity, we explained

that participants should start the script directly on the selected

project page in <DOC_SYSTEM>. This is the project’s

Description facet (as seen in Figure 3 in P6’s case). P1 didn’t

follow such instructions and recorded all his interaction steps,

from logging into <DOC_SYSTEM> to arriving at the Description

page, passing through additional <DOC_SYSTEM> pages

(Homepage, My Area, My Projects). Then, he created a specific

dialog3 to describe each of these steps: D2 – (<DOC_SYSTEM>)

First, go to <DOC_SYSTEM> / D3 – (Area) After that enter in

my area / D4 – (Projects) Next enter in my projects4.

Table 3 lists the topics of participants’ conversations. P1’s

additional pages are added in the “Other” column. P2 also has an

“other” topic. Like P1, he introduced <DOC_SYSTEM> in his

conversation: D2 – (1rst) First of all we have to go to the Web

site were my game is located, in this case its <DOC_SYSTEM>.

Remember that in the scenario (Frame 1), we asked participants

to include “Description” and “In Practice” (project in running

mode) in the conversations. Thus, we see in this table (see T.P.

column) that all participants talked about “Description” in at

least one dialog, but considering the second required facet, P5

did not. This means that his script takes Mr. T to the “In Practice”

facet, but P5 did not create a dialog to talk explicitly about it.

The participants were free to choose the third facet. We see that

3 The dialogs will be reproduced like this format: Dialog# -

(Dialog’s title) Dialog’s content. We kept misspellings and

capitalization. New lines will show as “[nl]”. The dialogs will

be reproduced completely, except when clearly said (for

example by […]). We underline interesting parts of the text.

4 P1 wrote the dialogs in Portuguese. However, they will be

presented in an English translation.

four of them included “Tags” (frequency of used commands) in

their conversations, two others chose “Rules” (project

programming presented in natural language) and one participant

also included “Worksheets” (details about the game’s space). P3

was the only one that included two additional facets, besides the

ones required (“Tags” and “Worksheets”).

Table 3. Conversations topics

               P1   P2   P3   P4   P5   P6   T.D.  T.P.
Introduction    1    1    1    1    2    1     7     6
Description     1    2    1    3    1    3    11     6
In practice     1    2    2    1    -    1     7     5
Tags            1    -    1    1    -    2     5     4
Rules           -    3    -    -    1    -     4     2
Worksheets      -    -    3    -    -    -     3     1
Goodbye         1    1    1    1    1    1     6     6
Other           4    1    -    -    -    -     5     2
Total           9   10    9    7    5    8    48     6

Italics: facets; T.D.: Total of dialogs; T.P.: Total of participants

Table 3 also allows us to see the most popular topics. “T.D.”

column shows that “Description” inspired the creation of 11

dialogs out of 48. Moreover, participants designed at most 3

dialogs to talk about the same topic: P2, about “Rules”; P3, about

“Worksheet”; and P4 and P6, about “Description”. Focusing on

P3, the total of dialogs (in Table 3) is 9, different from the 8,

presented in Table 2. Actually, P3 created 8 dialogs, but one of

them talked about two topics (“In Practice” and “Worksheets”):

D5 – (I Hope You´ve Enjoyed) I hope you have enjoyed my

game!! […] Now im going to show you a bit about my different

agents and how many I used in each worksheet. First I´ll be

showing you the first workesheet..... It is clear that she is talking

about something that has already passed (game playing) and

anticipating what is coming next (worksheets). P5 was the only

one with two introduction dialogs. He wrote: D1 – (Greeting)

Hello Mr. T come check out my new game! / D2 – (La locura of

the dudes) The name of my game is called La locura of the

dudes. It is pretty awesome and fun.

Table 4 addresses the time spent in each phase of the activity.

“Planning” involves navigating through <DOC_SYSTEM> in

order to choose the desired facets and mainly registering them in

the notepad file. P1 was the only one that did not plan the

conversation at all. He just jotted numbers 1 to 5 in the file and

did not do anything else. He preferred to record the script

immediately. P5 spent some time planning the conversation, but

he wrote only three words in the notepad and ended up erasing

everything, leaving the file empty.

Table 4. Duration (in minutes) of performed activities

                     P1     P2     P3     P4     P5     P6
Planning             1.13   8.02   4.57   4.17   6.23  10.57
Script recording     4.62   6.63  13.05   5.40   9.37   5.37
Dialogs creation    14.88  13.13  23.62   6.55  19.98  13.90
Dialogs execution    4.52   5.35   2.95   0.67   5.48   2.03
Form filling         8.70   7.02   6.65   5.43   4.97   5.30
Total               33.85  40.15  50.83  22.22  46.03  37.17

P6 spent more time in the first phase, because she did very

detailed planning. She wrote in the file the actions to be recorded

and outlined the text (Frame 2). She followed the plan during the

script recording and during the dialog creation to check what the

script’s steps were about. Because of this, she was the second

fastest participant at creating the script.

Sokoban:

Highlight Description

Text - describing description

Highlight Instructions

Text - describe instructions

Highlight Description

Highlight Tags

Go to Tags

Text - explaining tags and how to use and what to see

Click on Move to show an example

Text - who uses move

Highlight In Practice

Text - why don´t you try?

Click on In Practice

Final Text - Goodbye!

Frame 2. P6's planning file

The dialog creation was the most time-consuming phase, because

participants had to write down all the messages they wanted to

communicate. We can relate these data to the amount of text using a chart (Figure 4). We would like to comment on two cases. Comparing P1 and P2, we see that P1 wrote much less than P2 (10.22 vs. 31.40 words per dialog, from Table 2) yet spent slightly more time than he did (14.88 vs. 13.13 minutes). This happened because P1 had problems

during dialog creation. He closed the dialog editor without

saving them, so he had to rewrite some dialogs. P5 also had

problems; he had to do a major change in the script when he had

already started the dialogs writing. He did not lose his work, but

he had to reorganize things and change the text in some dialogs.

Because of that, in Figure 4, P1 and P5’s words per dialog are

smaller than the time spent. The other participants showed

coherent numbers: P4 wrote fewer words and spent less time; P2,

P3 and P6 wrote more words and spent more time.

Figure 4. Minutes creating dialogs X Words per dialog

4.2.2 Language analysis
As said before, the participants attend an American school, so in

some cases, they have to deal with three languages: English at

school; Portuguese in social interactions with Brazilians; and

their first language at home. Among the participants, 4 were

English speakers and 2 were Spanish speakers (see Table 5).

Half of them chose to write in their mother tongue and the other

half wrote in a foreign language. Curiously, only P1 wrote in

Portuguese; the Spanish-speaking kids wrote in English, though they could

have also used their mother tongue.

As P1, P2 and P3 wrote in a second language, their dialogs

presented some misspelling and grammar mistakes. For example,

P1 wrote: “Entre em o (no) terceiro discription link (link

‘description’) e leie (leia) o que apareçe (aparece).” P2’s

misspelled words: ithought (without), geting (getting),

immideatly (immediately), decepticons (depictions). Some of

P3’s misspelled words: elaborted (elaborated), rech (reach),

commans (commands), workesheet (worksheet). Certainly,

mistyping could have been the reason for those mistakes, but

curiously the other participants, who wrote in their mother

tongue, did not present any mistakes of this kind.

Table 5. Participants' language

                P1  P2  P3  P4  P5  P6
Mother tongue    E   S   S   E   E   E
Used language    P   E   E   E   E   E

E – English; S – Spanish; P – Portuguese

When P1 was writing his text, the influence of his mother tongue

emerged. When putting a name in the script he first wrote it in

English and then he erased it and wrote in Portuguese. The same

happened with the title of the first dialogs. In one dialog, he put

the title in English and left it so. Besides, at some point he asked

the observer how to say the word “check” in Portuguese. P3 also

showed a little language lapse: she wrote “bery” and then

corrected it to “very” (“v” sounds like “b” in Spanish). With

these facts, we tend to think that participants who wrote in their

first language expressed themselves “better”. However, as we

will see later, conversations varied in size and quality. For

example, P2, P3 and P6 created the wordiest dialogs, while P1,

P4 and P5 were more succinct.

4.2.3 Dialog Analysis
As is usually the case in qualitative studies, in spite of the small

number of participants we were able to see a variety of intentions

and styles of communication. The following subsections address

the main types of information identified in the dialogs.

4.2.3.1 Presentation of “self”
This category refers to cases where participants explicitly talked

about themselves, their ideas and their feelings. A first case of

this category is when they put their names somewhere in the

dialogs, identifying their authorship. P1 put his name in the first

version of the welcome dialog: Mr. T, this is your student P1. I

created this dialog to show you my sokoban. However, as said

before, he had to rewrite some dialogs after closing the editor

window without saving and the welcome dialog ended up like

this: D1 – (Start) Hi Mr. T I created this dialog to you can see

my sokoban. P2 also identified himself in the welcome dialog:

Hello Mr T, this is P2, and for this topic […]. The other two

participants identified themselves in the closing dialog, when

they said goodbye. P3’s last dialog: D8 – (BYE :() I guess this is

bye :( i hope you enjoyed my game and the side talk presentation.

[…] Thank you, P3. P4’s last dialog: D7 – (Bye) Hope you

enjoyed playing my game. [nl][nl] Thanks, [nl] P4. Besides the

identification inside the dialogs, P3 and P6 put their names in the

title of the conversation.

Participants also tried to make their feelings explicit. We

identified three strategies to do this: 1) emoticons; 2) repeated

letters; 3) capital letters. P2 used a smile emoticon when

challenging Mr. T to play his game: […] Good Luck, you will

need it. =). P3 adopted the three strategies to communicate with

Mr. T in a more natural way. She “smiled” to him in the dialog

about worksheets: MOVING ONTO THE SECOND

WORKSHEET:), but she sounded sad in the closing dialog: D8 –

(BYE :() I guess this is bye :( […]. Addressing the second

strategy, P3 did it in many situations: D1 – (Hiiiiii) Hi Mr T […]

/ D3 – (Tagssssss) If you scroll down […] / D4 (Play Time) […]

Use those instructions i shared with you at the beginning of my

presentation and enjoyyyy. The second level is pretty hard so you

are going to have to try your very best. GOOD LUCK. The use

of capital letters can already be noticed in P3's above-mentioned dialogs. All these strategies established a closer

communication with Mr. T. Also, participants were engaged in

the proposed activity, creating a “real” communication with Mr.

T, even knowing that this was only a test.

Another way of presenting the self was the extensive (and expected) use

of the first person in discourse. Some dialogs above showed first

person pronouns. We selected two of P2’s dialogs that illustrate

the use of first person: D1: (Frogger) Hello Mr T, this is P2, and

for this topic, i chose my game frogger, as with this one, i didnt

use any help, i did it all myself, and i am proud to present it.

Also, i found many of my own glitches and found a way to solve

them. / D5: (4th) If you want more information about my rules

and how I made the agents move the way they do, feel free to

click here. Another example is this sentence from one of P6’s

dialogs: “The description is the most fun to write, in my opinion,

because you can be as creative as you want with it.”

The extensive use of first-person words is strong evidence of the participants' need to talk about a personal topic and to make this clear to the interlocutor. Figure 5 shows the frequency of words

used in the dialogs. The biggest word (you) reflects the

communication aspect of this study. “You” is the constant

interlocutor mentioned during the SideTalk conversations. The

second biggest word (game) makes clear the main topic of all

conversations. However, we would like to highlight here the

great size of the words “I” and “my” (third place in size),

emphasizing the importance of first person in the discourse.

Figure 5. Words cloud

4.2.3.2 Presentation of the project
All participants used the dialogs to present their projects, directly telling Mr. T what the main topic of their conversations was. For

example, see P1 and P6 talking about their “Sokoban” game,

respectively: D1 – (Start) Hi Mr. T. I created this dialog to you

can see my sokoban / D1: (Hi!) Hi there Mr. T! [nl] Have you

seen my new Sokoban game? I think it´s great! […] We recall the

already mentioned P5’s second dialog that presents his project as

well: D2 – (La locura of the dudes) The name of my game is

called La locura of the dudes. It is pretty awesome and fun.

Participants talked about their projects not only by mentioning their

names. They provided additional information about the project

that had been omitted in <DOC_SYSTEM>. P2 did that in two

dialogs: D3 – (2nd) Over here, you can clearly infer it is about a

description of the game, and this is frogger. you have to get to

the checkpoint ithought falling into the water or geting run over

by a truck. Compare this text with the real description he

provided in <DOC_SYSTEM>: This is a frogger adaption made

by me, everything is original, as i drew and gave directions to all

of the agents. About the instructions, he wrote: D4 – (3rd) Over

here, in instruction, I state how to move and what not to do

(cheat) if you do so, you will find out immideatly, as the game

will reset and you will have to restart the level. The original

instructions on <DOC_SYSTEM> were: Use the arrows to move

the frog and get past all of the obstacles, remember, if you cheat,

you shall die. Cars and trucks shall maul you and kill you

fiercly, you cant swim. remember that. In this case, the text in

<DOC_SYSTEM> was detailed enough; P2 only reinforced the

importance of not cheating.

P3 clearly said she was taking advantage of the dialogs to

improve the original description: D2 – (Description) The

instructions arent very elaborted so I would like to explain it to

you better here. [nl] To win you have to reach the flag placed on

the other side of the road and rivers and position yourself right

on top of the flag. [nl] To move, you use the arrow keys, to rech

the other side without getting killed you have to dodge the trucks

and hop on the turtles and wood logs. Now, compare it to the

original text: (Description) Traditional Frogger with an exciting

turn! :) / (Instructions) Use the arrows to move! [nl] Be careful

not to sink in the water. [nl] Don't get ran over by the

trucks/logs and turtles! [nl] ENJOY.

Finally, P6 wrote in one of her dialogs: D7 – (Try it yourself)

Why don´t you give this game a try? […] Keep an eye out for the

bottom of the worksheet - I´ve added a step counter to make the

game a little more exciting. Try to keep it under 40! Trust me,

that´s harder than it sounds.

By observing how the participants supplied additional details

about their games, we see that in the end Mr. T will have two sources of information: one uploaded in <DOC_SYSTEM> and the other updated in the dialogs. Both are crucial parts of the whole

message communicated by participants.

4.2.3.3 Presentation of the interaction
This category has to do with the cases where participants guided

Mr. T through <DOC_SYSTEM> or clearly invited him to

explore it. P1’s dialogs were in essence this kind of guide: D2 –

(<DOC_SYSTEM>) First, go to <DOC_SYSTEM>. / D3 –

(Area) After that enter in my area. / D4 – (Projects) Next enter in

my projects. / D5 – (button) Then click on explore. P1’s texts

were almost a translation of the script steps, emphasizing the

sequence of the conversation. Another strong aspect of this guiding style is the use of the imperative form of verbs, noticeable in P1's text.

Other participants used a guiding style as well. P3’s dialog about

tags includes the following sentence: If you scroll down you are

going to be able to see a fancy diagram with a couple of words.

In other dialog, she wrote: D7 – (Second Worksheet) MOVING

ONTO THE SECOND WORKSHEET:). The most of P4’s dialogs

started with “here”, followed by a description about what is

happening: D3 – (Instructions) Here are the instructions on how

to play the game. / D5 – (Commands) Here you can see the

commands each agent has. In all these cases participants helped Mr. T know what was happening behind the dialogs.

Participants’ invitations to explore <DOC_SYSTEM> can be

seen in some cases. P1 wrote: D8 – (Tags) After that click on

tags and if you want you can see which of my agents used which

commands. P2 let Mr.T free to interact: D5 – (4th) If you want

more information about my rules […], feel free to click here. In

the first dialog, P5 invited him: D1 – (Greeting) Hello Mr. T

come check out my new game! Finally, P6 asked: D7 – (Try it

yourself) Why don´t you give this game a try? […].

4.2.3.4 Presentation of the pages
Most participants introduced pages or pages' details before they

were actually loaded. All P1’s dialogs reproduced above

followed this pattern. While reading the dialog, Mr. T sees the

link that goes to the referred page. In one of P3’s dialogs, she

says: Now im going to show you a bit about my different agents

and how many I used in each worksheet. First I´ll be showing

you the first workesheet..... At this point, the browser shows the

“Tags” page but the link to the “Worksheet” facet is highlighted,

as seen in Figure 6. In other words, the dialog talks about

something that will appear after the user clicks on “Continue”.

Figure 6. Going to "Worksheets" facet

On the other hand, some participants preferred to establish “real-time” communication, meaning that the dialogs talk about something

currently displayed in the browser. Figure 3 is a clear example of

this strategy. P6 is talking about the instruction on the

Description page and it is highlighted synchronously. P2 used

“here” and “over here” in the dialogs that were pointing up to

present content: D3 – (2nd) Over here, you can clearly infer it is

about a description of the game,[…] / D4 (3rd) Over here, in

instruction, I state how to move and what not to do […] / D7 –

(6th) Here you can see the rules and how I edited the

decepticons. P4 followed the same pattern: D2 – (My Game)

Here is a sample of what the game will look like. / D3 –

(Instructions) Here are the instructions on how to play the game.

In all cases the part of the page mentioned in the dialog was highlighted in green, calling Mr. T’s attention to the target of the dialog.
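To make the two strategies above more concrete, the following sketch, which is ours and not part of SideTalk’s actual implementation, illustrates how a mediation dialog could be anchored either to the page that the next script step will open or to content already displayed in the browser; all type and field names are hypothetical.

    // Hypothetical TypeScript sketch of the two anchoring strategies described in 4.2.3.4.
    type AnchorMode = "next-page" | "current-content";

    interface DialogAnchor {
      mode: AnchorMode;   // "next-page": the dialog introduces what loads after "Continue"
      target: string;     // what gets highlighted for Mr. T
    }

    // P1- and P3-style dialogs announce the page before it loads, so the link is highlighted:
    const introducing: DialogAnchor = {
      mode: "next-page",
      target: 'link to the "Worksheets" facet',
    };

    // P2-, P4- and P6-style "real-time" dialogs point (in green) to what is already on screen:
    const realTime: DialogAnchor = {
      mode: "current-content",
      target: "instructions shown on the Description page",
    };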

4.2.3.5 Presentation of further information
There are many examples showing how the participants interpreted the meanings of the facets. One of P2’s dialogs defined “Rules” as follows: D6 – (5th) This link will take you to the place

where you can find my rules and all of my agents. P3 defined the

“Tags” facet: D3 – (Tagssssss) If you scroll down you are going

to be able to see a fancy diagram with a couple of words, this

words are the commands I used to create my game, the bigger

the command is, the more times ive used it. […]. P6’s dialog

(Figure 3) explained exactly what instructions are about.

In some cases participants brought the broader context into the

dialog, making the conversation go beyond <DOC_SYSTEM>.

We highlight three different pieces of information about the description, presented in P3’s, P4’s and P6’s dialogs,

respectively: The instructions arent very elaborted so I would

like to explain it to you better here. / Here is a description on

what the game is about. I did not put one because I forgot. / The

description is the most fun to write, in my opinion, because you

can be as creative as you want with it. An interesting point is

that P4 said she forgot to write the description in

<DOC_SYSTEM> but, unlike P3, she did not take advantage of the dialog to correct her mistake. P2 justified the presence of

undesired agents in his game: D7 – (6th) Here you can see the

rules and how I edited the decepticons. Sometimes, I accidently pressed the new agent button instead of the new decepticon button , so there are agents that arn´t used in the game or have any rules. (By “decepticons” he meant “depictions”, the visual appearances of agents; one agent can have many depictions.) Only an interlocutor like Mr. T is able to understand

P2’s explanation. P3 called Mr. T’s attention to the impressive numbers behind her game: D3 – (Tagssssss) […] As you can see I´ve

used 16 commans although this is a pretty simple game. / D6

(WOW) WOW !!!! CHECK OUT HOW MANY AGENTS I USED

ONT THIS WORKSHEET! WELL, WHATCH OUT BECAUSE

ON THE NEXT ONE I USED MORE. P6 had the same concern:

D5 – (Tags) This link shows you all the commands used for the

game. In my game, there weren´t that many commands used, but

in others, there are lots. She means that she used few commands but could use more if she wanted. These last three cases show how proud the participants were of their achievements in designing the games and how much they wanted Mr. T to know it.

Besides talking about the facets and the context, most

participants gave some technical details in order to help Mr. T’s interaction. They used the names of interaction elements and actions in the text to give directions. For example, P1’s dialogs: D5 – (button) Then click on explore / D6 – (Discription link) Enter in the second description link and read what appears. P2 excelled at giving this kind of “technical support”: D8 – (7th) Over here, if you were

to click, it would open up another Web site with my game

(frogger). / D9 – (8th) Java will ask you if you want to execute

the program, click on the box that says that you will accept the

risks, the on the execute button and wait for it to load […].

These dialogs referred to the new window that opens to load a Java applet with the game. To play the game, one needs to access this

applet. P3’s third dialog has this sentence: D3 – (Tagssssss)

[…]As you can see I´ve used 16 commans although this is a

pretty simple game. ( This is highlighted in a greenish colour on

your right). P5 followed this pattern in one dialog: D4 – (Just in

case) The next link will simply show you what rules I used to

make the game. Finally, P6 also gave some technical tips: D2 –

(Description Home) This is the main page of Descriptions. On

here, you can see the picture of my game and other details about

it. / D5 – (Tags) This link shows you all the commands […].

4.2.4 Discourse styles
By analyzing each participant’s conversation as a whole, we can identify their discourse styles. P1, P4 and P5 were more succinct. P1 was the most impersonal, mainly giving Mr. T direct instructions. Like P1’s, P4’s dialogs also guided Mr. T, focusing the conversation on the pages themselves rather than on the dialog content. P4 said only what was necessary, but in more informal language. P5, even with short dialogs, was a little more informal, highlighting the fun of the game.

P2, P3 and P6 created more elaborate conversations. All of them

established some intimacy with the interlocutor. P2 was proud to

present his game; P3 was very excited and effusive; and P6 asked

questions as if she were really talking to Mr. T: Have you seen my

new Sokoban game? / What´s going on? / Why don´t you give

this game a try? / Did you help Wally, just like I asked? She also

made strong connections between the dialogs themselves. One of

them is the question above about Wally. Another case: in the dialog about instructions she warned: Don´t forget this stuff - if

you want to play, you´ll need it!; and in the dialog about the

game she checked: I hope you still remember the instructions

from the beginning... You do, don´t you? A last remark about

P6’s discourse is seen in the last dialog: […] I hope to have a

new game for you sometime - or maybe you could make one to

show to me! First, she seems to extend the conversation – it does not end here; she wants to keep talking – and, second, she treats the teacher as a peer: he assigned a task (making a game) to her, and now she assigns a task to him. She does it in a relaxed way, hoping to get a response from him.

4.2.5 Questionnaire answers
In the after-test questionnaire, the participants answered six open-ended questions about the experience of creating the SideTalk conversation for Mr. T (see Table 6).

Table 6. After-test questionnaire
1. What did you think about this experience?
2. Which game did you choose? Why?
3. Which <DOC_SYSTEM> pages did you choose to include in the conversation? Why?
4. What do you think about the sequence of pages you chose?
5. How did you feel when creating this conversation with Mr. T? Why?
6. What do you want Mr. T to think of your project, after watching your presentation?

In the first question, they gave their general opinion about the

experience. We highlight the answers addressing how they saw

SideTalk as a good communication tool: “I thought it was a

really good way to explain your game and the making while it’s

very easy for the person on the other side to understand.” (P3);

“I liked how you can talk to your teacher and show the game to

him/her [...].” (P4); “I like how it is an easy way to

communicate, you don't have to click anything, the computer just

does it for you!” (P5); “[…]. I think it would be cool if all the

other kids could do this as well, though.” (P6).

The second question received unanimous answers: all of the participants chose their best or funniest game to present in the SideTalk conversation. In addition, some participants provided further

justification: “I chose the game Frogger because I am especially

proud of it. I did it myself without the help of the Wiki resource. I

did all of the agents and figured out how to do everything.” (P2);

“I chose Frogger because I thought it was one of my best games,

and it was the one with the best and hardest second level.” (P3).

The third and fourth questions addressed the choice of <DOC_SYSTEM> pages. The answers made the adopted communication strategies explicit. P2’s answers were very

revealing: “I chose to include the Rules, because sometimes

teachers want to see how you did everything, so to see you didn’t

cheat, so I thought that if Mr. T was on a trip, this would be an

adequate page he should see.” / “I chose to start with description

and at the end practice, because I want Mr. T to have end with a

good impression and the game would be the way to do it. I did it

in an essay way. 1rst page= medium interesting, 2nd page= the

worst out of the 3, and the last is the most interesting.” P4 made it clear that her goal was to help the interlocutor play her game:

“I chose in practice, commands, description, and instruction

because they are what you need to know to play the game.” / “I

started the pages the way I did because I think the order is good,

it tells you how to play and what the game is about then lets you

play.” P6, on the other hand, considered it important to show how the game was made, and not only how to play it: “I chose to use

the Tags page as my third page because I think it’s neat how you

can see exactly who uses which command.” / “I chose to do

Description first because that´s the page you open to. I decided

to put In Practice last because that way they would understand

what they were playing before they played it. […]”

As for the fifth question, in general, participants said they felt

comfortable in the activity and that it was easy. P6 explained: “It

was really easy for me because I really like writing.” P4

explained why she felt comfortable: “[…] because I was just

showing him my game.” That is, she masters the topic, since it was her own expression – her game.

With the last question, we wanted to know what impression they wished to make on Mr. T. Most participants tried to show their effort, making it clear that they wanted Mr. T to feel proud of them and to have a good impression of their games: “I want him to think good of

my project because I put a lot of effort and hard work into it.”

(P2) / “I want him to think I tried my best and I want him to like

it.” (P4) / “I think he will love my project, most people say it is

really fun.” (P5) / “I want Mr. Tim to be proud of me and I hope

he sees hope in me. Wow, that was cheesy...” (P6)

5. CONCLUSIONS
We have described a case study on the self-expression revealed during EUD activities performed with SideTalk. During the tests, we did not ask participants to “express themselves” – they only had to communicate with Mr. T. However, we could observe how they explicitly expressed themselves in their conversations. This happened because people express themselves in everything they do, intentionally or not [12]. From all the evidence reported in the previous section, we saw that they expressed themselves, and we also saw how they did it. In general, participants were telling a story through their own games. The games themselves are stories with characters, actions, space and time. Through those narratives they showed their “individual voices”, because people often express their individuality while narrating personal stories [12]. As we saw, the six participants produced remarkably different narratives, projecting different selves in discourse with the resources available [12].

Although this discussion of self-expression in participants’ narratives is based entirely on oral or written stories, we suggest that it can be extended to the computational and interface languages involved in the process of designing SideTalk conversations (as EUD products). We therefore believe that SideTalk may be a privileged space in which to investigate communication phenomena such as self-expression.

SideTalk is one of several available tools that follow the “computer as medium” view. As pointed out in the related work section, CSCW studies have revealed the importance of communication processes mediated by technology. In the specific study described here, SideTalk was used in a collaborative context (communication between students and a teacher). Although the participants are children, they work in groups – with classmates and the teacher, in the classroom or through SideTalk. Hence, the research described here fits into the field of Computer-Supported Collaborative Learning (CSCL). Our research is a first step toward investigating the manifestation of subjectivity in technological processes that result from programming activities done by end users. We believe that it would allow CSCW and CSCL researchers to address questions that have so far remained hidden or underexplored due to the lack of directly related research. For example, CSCL researchers might be interested in investigating how to improve communication between students and teachers by examining the self-expression of the interlocutors, mainly through technology. Likewise, CSCW scholars have the opportunity to explore in depth how workers express themselves while acting as end-user developers; how decisions about the design and programming of EUD systems impact their collaborative activities; how they communicate while negotiating group decisions about technology (or while otherwise affected by it); how different ways of self-expression are consciously or unconsciously used to persuade others; and, most importantly, how CMC resources in group technology facilitate (or hinder) this sort of communication.

As near-term future work, we have two more studies planned. The first is an investigation of the reception of the SideTalk messages. We will show Mr. T all six conversations created by his students, and we are interested in seeing how he identifies the self-expression of their authors. Since he knows the students well, he should be able to identify their styles, personalities and idiosyncrasies behind the communication he receives. The second study will be similar to the one described here: this time we will invite students from another private Brazilian school to create SideTalk conversations about their projects in <DOC_SYSTEM>, with their teacher as interlocutor, as we did with the first group. In this case, all participants will be Brazilian, Portuguese-speaking students, so we will be able to contrast the results from the two investigations.

6. ACKNOWLEDGMENTS
<Omitted for blind review>

7. REFERENCES
[1] Alves, A. S. et al. (2013). Using Mediating Metacommunication to Improve Accessibility to Deaf in Corporate Information Systems on the Web. In 7th International Conference, UAHCI 2013, v. 8010.
[2] Andersen, P. B., & Holmqvist, B. (Eds.). (1993). The computer as medium. Cambridge University Press.
[3] Argamon, S. et al. (2007). Mining the Blogosphere: Age, gender and the varieties of self-expression. First Monday, 12(9).
[4] Arhippainen, L. et al. (2012). Designing 3D virtual music club spaces by utilizing mixed UX methods: from sketches to self-expression method. In Proceedings of the 16th International Academic MindTrek Conference. ACM.
[5] Cowan, L. G. (2010). Supporting self-expression for informal communication. In Proceedings of the 12th ACM International Conference Adjunct Papers on Ubiquitous Computing. ACM.
[6] DeAndrea, D. C. et al. (2010). Online language: The role of culture in self-expression and self-construal on Facebook. Journal of Language and Social Psychology, 29(4).
[7] de Souza, C. S. (2005). The semiotic engineering of human-computer interaction. The MIT Press.
[8] de Souza, C. S., & Preece, J. (2004). A Framework for Analyzing and Understanding Online Communities. Interacting with Computers, Amsterdam, v. 16, n. 3.
[9] DiMicco, J. M., & Millen, D. R. (2007). Identity management: multiple presentations of self in Facebook. In Proceedings of the 2007 International ACM Conference on Supporting Group Work (GROUP '07). ACM, New York, NY, USA.
[10] Gürsoy, B. (2013). The Expression of Self-Identity and the Internet. Journal of Educational and Social Research, 3(7).
[11] Hwang, L. H. et al. (2010). Promoting oneself on Flickr: users' strategies and attitudes. In Proceedings of the 16th ACM International Conference on Supporting Group Work (GROUP '10). ACM, New York, NY, USA.
[12] Johnstone, B. (1996). The linguistic individual: self-expression in language and linguistics. New York: Oxford University Press.
[13] Ko, A. et al. (2011). The state of the art in end-user software engineering. ACM Computing Surveys (CSUR), 43(3).
[14] Leshed, G. et al. (2008). CoScripter: automating & sharing how-to knowledge in the enterprise. In Proceedings of the 26th SIGCHI Conference on Human Factors in Computing Systems (CHI 2008). New York, NY: ACM.
[15] Lieberman, H., Paternò, F., & Wulf, V. (Eds.). (2006). End user development (Vol. 9). Springer.
[16] Moere, A. V., & Hoinkis, M. (2006, November). A wearable folding display for self-expression. In Proceedings of the 18th Australia Conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments. ACM.
[17] <Omitted for blind review>
[18] <Omitted for blind review>
[19] <Omitted for blind review>
[20] <Omitted for blind review>
[21] Sung, Y. et al. (2011). Actual self vs. avatar self: the effect of online social situation on self-expression. Journal For Virtual Worlds Research, 4(1).
[22] Turkle, S. (2005). The second self. 20th anniversary ed. Cambridge, MA: The MIT Press.
[23] Turner, K. H. (2011). Digitalk: Community, convention, and self-expression. National Society for the Study of.
[24] Walsh, C. (2010). System-based literacy practices: Digital games research, gameplay and design. Australian Journal of Language and Literacy, 33, 24-4

SideTalk: interpersonal communication on/about the Web

First author: Affiliation, Address, E-mail, Phone
Second author: Affiliation, Address, E-mail, Phone
Third author: Affiliation, Address, E-mail, Phone

ABSTRACT
The internet has drastically changed the way people communicate, interact with each other and participate in society. When ordinary users have the chance to design and program their own discourse, the reach and power of their participation are increased. SideTalk is an interpersonal communication tool that involves and requires end-user development (EUD) activities. We propose to demonstrate how SideTalk works and how its EUD capabilities open the avenue for new kinds of investigation in Human-Computer Interaction.

Keywords
End-user development, computer-mediated communication, social participation.
ACM Classification Keywords
H.4.3 Information Systems Applications: Communications Applications; H.5.2 Information interfaces and presentation (e.g., HCI): User Interfaces

INTRODUCTION
Among the many impacts that today's high connectivity has brought to people's lives are the possibilities of social communication and participation promoted by the World Wide Web. Areas such as Human-Computer Interaction (HCI), Collaborative Systems (CSCW) and Computer-Mediated Communication (CMC) are attentive to the social changes involved in this new process of social interaction. The most frequent and visible forms of communication and social participation through the internet are those involving social networks and other interpersonal communication systems, such as e-mail, forums, chats, blogs and personal pages/websites. "Ordinary" people are thus creating content and publishing it on a large scale, in the opposite direction of the media model that marked much of the twentieth century, when broadcasting companies transmitted their content unidirectionally through radio, television, newspapers and magazines. However, the vast majority of internet users act as "ordinary users", that is, they restrict themselves to using a tool offered to them and taking advantage of it within the limits of the proposal for communication and social participation that it intrinsically embodies. There is, however, a way to give users even more freedom of expression and power of action: giving them some power to program their own medium, mode, content and form of communication. When users become authors of their own programs, they can start to embed in them their own proposals for interaction and social action, and thus exercise their creativity more freely and broadly. It is along these lines that we are developing SideTalk, a communication tool different from those commonly found in CMC, since it involves end-user development (EUD) activities¹. A SideTalk user must build a genuine piece of software (a program) to communicate his or her message to the intended interlocutors. SideTalk enables a special kind of communication, based on scripts of navigation or interaction on web pages, as described next.
SIDETALK
Technically, SideTalk [3, 4] is a Firefox extension developed on top of the CoScripter macro recorder [2]. Communication through it begins with the sender of the message selecting one or more pages to incorporate into the communication and recording a script with a sequence of interaction steps (typically involving access to URLs, clicks on links and buttons, filling in fields, selections in lists, etc.). Next, the sender creates mediation dialogs (which may explain, comment, translate, illustrate or perform any other communicative function that occurs in interpersonal dialog) for the navigation steps he or she deems necessary. The set formed by the selected pages, the recorded script steps and the mediation dialogs constitutes a SideTalk conversation.

¹ The translation ‘desenvolvimento por usuários finais’ is not yet established in the technical vocabulary of the field in Portuguese.
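As an illustration of the structure just described, the sketch below, which is ours and not SideTalk's actual code, models a conversation as the combination of the selected pages, the recorded script steps and the mediation dialogs. All names are hypothetical, the URL is a placeholder, and the CoScripter-like step strings are merely illustrative.

    // Hypothetical data model for a SideTalk conversation (illustration only).
    interface ScriptStep {
      index: number;        // position in the recorded navigation script
      instruction: string;  // CoScripter-like step, e.g. 'click the "explore" button'
    }

    interface MediationDialog {
      stepIndex: number;    // which script step this dialog mediates
      title: string;        // short label shown for the dialog
      htmlContent: string;  // dialog body written by the sender (may contain HTML)
    }

    interface SideTalkConversation {
      pages: string[];              // URLs the sender chose to incorporate
      script: ScriptStep[];         // the recorded sequence of interaction steps
      dialogs: MediationDialog[];   // explanations, comments, invitations, etc.
    }

    // Illustrative instance, loosely inspired by P1's conversation in the study:
    const example: SideTalkConversation = {
      pages: ["https://example.org/doc-system"],  // placeholder URL
      script: [
        { index: 1, instruction: "go to the documentation system" },
        { index: 2, instruction: 'click the "explore" button' },
      ],
      dialogs: [
        { stepIndex: 2, title: "button", htmlContent: "Then click on explore." },
      ],
    };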

Figure 1. Conversation about restaurants in SideTalk

On the other side of the conversation, the receiver carries out the navigation programmed by the sender, guided by the mediation dialogs associated with each step. The dialogs are displayed in a browser sidebar, so that this user has simultaneous access to the content of the main area and to the content added by the sender. Figure 1 shows a dialog from a conversation between two friends created with SideTalk. The sender (the creator of the dialogs) wants to talk about interesting places for the receiver to visit when he arrives in Rio de Janeiro.
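The playback just described might look roughly like the sketch below, which reuses the hypothetical types from the previous sketch and is again ours, not the authors' implementation: the receiver-side loop shows each mediation dialog in the sidebar, highlights the element it refers to, waits for the reader to press "Continue", and only then replays the recorded step. The helper functions are placeholders.

    // Hypothetical receiver-side playback loop (uses the types sketched above).
    async function playConversation(conv: SideTalkConversation): Promise<void> {
      for (const step of conv.script) {
        const dialog = conv.dialogs.find(d => d.stepIndex === step.index);
        if (dialog) {
          showInSidebar(dialog.htmlContent);  // dialog appears next to the main page area
          highlightTarget(step);              // e.g. mark the referred element in green
          await waitForContinue();            // receiver reads, then clicks "Continue"
        }
        await executeStep(step.instruction);  // replay the recorded interaction step
      }
    }

    // Placeholder helpers standing in for whatever the real extension does internally.
    function showInSidebar(html: string): void { console.log("[sidebar]", html); }
    function highlightTarget(step: ScriptStep): void { console.log("[highlight]", step.instruction); }
    function waitForContinue(): Promise<void> { return Promise.resolve(); }  // would wait for a click
    function executeStep(instruction: string): Promise<void> {
      console.log("[execute]", instruction);
      return Promise.resolve();
    }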

To create this conversation, the sender performs EUD activities. SideTalk's programming interface is a mixture of programming by demonstration (the main style adopted by CoScripter), parametric programming (for example, selecting predefined values from a list or entering new values in text boxes) and textual scripting (for example, writing HTML code for the dialog content or editing the steps encoded in CoScripter's scripting language).

RESEARCH
The development of SideTalk has always been guided by results observed in empirical studies, carried out with both sender and receiver users, in isolated scenarios (observations with one group or the other) or jointly (observations with interlocutors of the same conversation). Among the variety of contexts observed during research with SideTalk, we highlight: accessibility [3], teacher-student communication [4], evaluation of design alternatives, and investigation of self-expression [4]. SideTalk creates a special relationship between senders and receivers. Besides potentially benefiting the areas of accessibility, teaching, and HCI design and evaluation, SideTalk has the potential to advance HCI research, in particular Semiotic Engineering. This theory highlights the role of the designer in the process of constructing the meanings presented in an interactive system, communicating with the user through the interface at interaction time. The system itself thus acts as a representative of the designer when the designer is not present. Interaction design is precisely the result of choosing the contents, scripts and forms of communication that the user may have with this representative of the designer. For this reason, besides being a communication tool in itself, SideTalk has proved to be a promising research tool, because it brings into evidence the user who acts as the designer involved in the semiotic engineering of the conversation (which is a computational artifact, an executable program); and it highlights the metacommunicative aspect of the dialogs, since they always refer to a pre-existing communication (the websites that are part of the conversation). A prototype of a new version of SideTalk is currently under development to support the learning and critique of MoLIC diagrams [5].

REFERENCES
1. de Souza, C. S. The semiotic engineering of human-computer interaction. The MIT Press (2005).
2. Leshed, G. et al. CoScripter: automating & sharing how-to knowledge in the enterprise. In Proc. CHI 2008, ACM Press (2008).
3. <Omitted for blind review>
4. <Omitted for blind review>
5. Barbosa, S. D. J. and da Silva, B. S. (2010). Interação Humano-Computador. Rio de Janeiro: Elsevier-Campus.