Episode 270: Alyssa Hillary Zisk and Lily Konyn: Gestalt Language Processing and AAC

This week, we present Rachel’s interview with Alyssa Hillary Zisk and Lily Konyn, two autistic part-time AAC users who are members of the AAC Research Team at AssistiveWare. Alyssa and Lily discuss Gestalt Language Processing, including research into using immediate and delayed echolalia for communicative purposes and why research suggests being a “gestalt language processor” may be more of a spectrum than a binary. They also discuss practices that make modeling less effective, including talking while modeling, “+1 modeling”, and “key word” modeling.

 

Before the interview, Chris banters with Rachel - in a car, in person! They talk about a co-worker of Chris’s who recently led a successful AAC awareness training for a Kindergarten class! Chris shares some of the positive feedback and encourages educators to try the idea in their own schools!

 

Key ideas this week:

 

🔑 Programming in phrases that we think might be helpful is not “adding a gestalt” to the device; it’s just adding a useful phrase. Gestalts have an established emotional connection to the person who is learning language; a phrase doesn’t become a gestalt just because a therapist or parent thinks it would be useful.

 

🔑 Alyssa says that there is not a lot of research specifically referencing Gestalt Language Processing, but there is relevant research about delayed and immediate echolalia being used for communicative purposes. Alyssa also mentions firsthand accounts from autistic people who first used echolalia to communicate, as well as “remixed echolalia”, i.e., taking a script and moving or changing a part of it, which is very similar to the idea of “mitigated gestalts” in gestalt language processing.

 

🔑 Alyssa and Lily are not supporters of “+1 modeling”, where the communication partner models utterances one word longer than the AAC user’s. Alyssa and Lily think this may cause an AAC user to become stuck using only one button, because they only ever see two-word utterances modeled. We model full sentences to children, not just sentences one word longer than what they are saying.

 

🔑 Similarly, Alyssa and Lily suggest that communication partners should refrain from saying words aloud as they are inputting them into the AAC device - it can create competing auditory channels, which is difficult for anyone with auditory processing challenges.

 

🔑 Alyssa and Lily are also not supporters of “key word modeling”, where the communication partner models only one or two key words as they are talking. One word is faster to model than the entire sentence, but when AAC users try to communicate themselves, they will find it is a lot more difficult than pressing one or two buttons. Alyssa and Lily believe this could cause AAC users to stop trusting their communication partner, or to decide that they are inherently bad at AAC.

 

Links from this week's episode:

 

AAC for Speaking Autistic Adults by Alyssa Hillary Zisk: https://www.liebertpub.com/doi/abs/10.1089/AUT.2018.0007

 

How to Talk about AAC Users (According to Them) by Alyssa Hillary Zisk and Lily Konyn: https://www.assistiveware.com/blog/how-to-talk-about-aac

 

Previous

Episode 271: Samantha Hagness & Becky Woolley (Part 1) - Modeling AAC in the Classroom Using Grid 3

Next

Episode 269: Darla Ashton: What Have We Learned About AAC in the Last 10 Years?