Design Idea: Chinese Character Helpr

Project Leaders: Xianquan Liu & Justin Olmanson
Status: In Development, Accepting Collaborators
Funding Via: UCARE

Have you ever tried to learn Chinese? If you have, you know just how hard it is to learn to pronounce the different tones in spoken Chinese. You’ll also know that learning to write in Chinese is part memorization, part detective work, and part repetition.

This design idea proposes a way to help Western learners of Chinese express themselves in characters using what they know about Chinese pronunciation as well as audio pronunciations and images. Here’s how it might work:

First, students type in the word they want to write based on how it sounds.

Next, if they don’t recognize any of the characters as the one they want, they can listen to how each one is pronounced.

If they still don’t know which character they want, they can access images for each one.

When they find the character they need, they click or tap on it and it gets inserted into their text.
Here’s a more academic explanation:

Despite the wide use and availability of multimodal print-based and new media-based supports for Chinese character acquisition, few applications exist that allow students to leverage their nascent but very real Chinese language knowledge for character production. Western learners of Chinese struggle with producing and recognizing characters despite having aural and pinyin-related knowledge of many Chinese words and phrases. This difficulty stems in part from a lack of explicit, easily discernible sound-symbol mappings between spoken words and written characters. Our design is a techno-pedagogical pivot away from rote memorization and toward authentic, contextualized character production, noticing, and multimodal support for written expression in Chinese.

Chinese Character Helpr is being iteratively designed as an extended version of an open-source, operating-system (OS)-level Chinese input method. The design affords Western learners an opportunity to leverage their listening and speaking knowledge in the production of Chinese texts. This approach extends the input method native Chinese speakers currently use when writing in Chinese on digital devices. Students begin by typing the pinyin version of the word or idea they wish to express; a series of options appears based on how many words use the same letters. Each vowel in Chinese can carry one of four tones, denoted by an accent mark above it. This means that the wo written in Figure 1 below can refer to four different pronunciation variations. Additionally, some words in Chinese share the same pronunciation but have different meanings and characters; thus an unaccented pinyin syllable can have from one to more than six possible meanings and potential character matches.
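The candidate lookup described above can be sketched as a simple mapping from an unaccented pinyin syllable to numbered character options. This is only an illustrative sketch; the dictionary entries below are a tiny hand-picked sample, not the lexicon the actual input method would use:

```python
# Sketch of the candidate lookup behind a pinyin input method.
# Unaccented "wo" covers all four tones; each tone can map to one
# or more characters with distinct meanings.
CANDIDATES = {
    "wo": [
        ("我", "wǒ", "I, me"),
        ("窝", "wō", "nest"),
        ("握", "wò", "to grasp"),
        ("卧", "wò", "to lie down"),
    ],
}

def lookup(pinyin: str) -> list[str]:
    """Return the numbered character options for an unaccented pinyin syllable."""
    options = CANDIDATES.get(pinyin, [])
    return [f"{i}. {ch} ({tone}): {gloss}"
            for i, (ch, tone, gloss) in enumerate(options, start=1)]

for line in lookup("wo"):
    print(line)
```

Typing "wo" would surface all four options; the learner's task, addressed in the sections that follow, is identifying which of those visually similar-looking candidates is the one they mean.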

Figure 1. The process of producing Chinese characters in digital media uses pinyin and visual identification of intended characters.

Advanced Chinese learners and native speakers who already know the characters visually scan the numbered options created by the input method to identify the intended character, or continue typing in pinyin until their character appears as an option. Once they identify it, they can add it to their document by tabbing through the options and selecting it.

For Chinese learners who are not yet able to visually differentiate and identify the correct character, the standard input method described in the previous paragraph and shown in Figure 1 is not helpful. Our design extends the typical input method in three ways and via two different modalities. First, after the possible characters have been displayed as in Figure 1 for two seconds, learners can access audio pronunciations of each of the possible characters by tabbing to or hovering the mouse over it (Figure 2).

Figure 2. After a two-second delay, students are supported via audio pronunciations of any character option they consider.


The use of this type of aural support connects with our interest in allowing the learner to make use of what they know in building new knowledge and understandings. After the audio pronunciations have been accessible for three seconds, learners can view several images representative of each of the character options (Figure 3). The rationale for the delay and sequencing of aural and visual supports relates to our interest in ensuring that learners focus on expressing themselves via characters as independently as possible. By building in delays, we aim to create gradually more supportive language development zones that are accessed only when needed (Vygotsky, 1978). In this way we guard against the images or audio becoming a crutch that impedes character acquisition, yet still provide multimodal support in a timely way compared with external character lookup.
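The staged unlocking of supports can be sketched as a small timing gate. The two- and three-second thresholds come from the design description above; the function name and return format are assumptions for illustration:

```python
# Sketch of the timed scaffolding: character options appear immediately,
# audio pronunciations unlock after 2 seconds, images after 3 seconds.
AUDIO_DELAY = 2.0   # seconds before per-option audio becomes available
IMAGE_DELAY = 3.0   # seconds before per-option images become available

def available_supports(elapsed: float) -> list[str]:
    """Return which supports a learner may access `elapsed` seconds
    after the candidate list was first displayed."""
    supports = ["characters"]          # the numbered options, always visible
    if elapsed >= AUDIO_DELAY:
        supports.append("audio")       # pronunciations on tab/hover
    if elapsed >= IMAGE_DELAY:
        supports.append("images")      # representative images per option
    return supports

print(available_supports(0.5))   # ['characters']
print(available_supports(2.5))   # ['characters', 'audio']
print(available_supports(4.0))   # ['characters', 'audio', 'images']
```

Keeping the thresholds as named constants reflects the pedagogical intent: the delays themselves are a design parameter that could be tuned per learner as their character recognition develops.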

Figure 3. After a three-second delay, students are supported via several images corresponding to the meaning of the character.

By embedding the tool at the OS level, students can use it within most web-based and word-processing applications. More importantly, we seek to encourage meaningful, authentic, contextual interaction and expression with characters that go beyond character flashcards or repetitive character practice.

In these ways, the design of Chinese Character Helpr combines existing technologies in a deeply pedagogical way that supports expression and language development via multimodality at the level most appropriate for the learner.

Figure 4. The resultant scaffolded selection of a character is inserted into the text.
