Recent good news

We’re happy to share some recent good news from across the Crow team.

Nina Conrad

Congratulations to Crow researcher and University of Arizona doctoral candidate Nina Conrad, who was awarded a Bilinski Fellowship for her dissertation project, “Literacy brokering among students in higher education.” The fellowship will fund three semesters of writing and includes professional development opportunities.

Hannah Gill

Crow researcher Hannah Gill was admitted to the Mandel School of Applied Social Sciences at Case Western Reserve University, with a scholarship and funding to support her fieldwork. In May, Hannah will graduate from the University of Arizona with a double major in English and Philosophy, Politics, Economics, and Law (PPEL).

Crow Workshop Series

Thank you to everyone who attended our third Crow Workshop Series event, which focused on grant writing. If you were not able to attend, you can watch the video on our YouTube channel. Our slides and handout are also available.

We were so pleased by the turnout. Our workshop team (Dr. Adriana Picoral, Dr. Aleksandra Swatek, Dr. Ashley Velázquez, and Dr. Hadi Banat) is reviewing the feedback we got and planning our next event. Stay tuned! 

Dr. Adriana Picoral

Dr. Picoral was awarded a mini-grant for a series of professional development workshops designed to increase the gender inclusivity of the data science programs at the University of Arizona. The workshops will be hosted by Dr. Picoral in cooperation with two invited speakers. 

Publications

Ali Yaylali, Aleksey Novikov, and Dr. Banat wrote about Crow’s approach to data-driven learning (DDL) in “Using corpus-based materials to teach English in K-12 settings,” published in TESOL’s SLW News for March 2021. This is our second piece for SLW News, following “Applying learner corpus data in second language writing courses,” written by Dr. Velázquez, Nina Conrad, Dr. Shelley Staples, and Kevin Sanchez in October 2020.

Finally, Dr. Picoral, Dr. Staples, and Dr. Randi Reppen published “Automated annotation of learner English: An evaluation of software tools” in the March 2021 issue of the International Journal of Learner Corpus Research. Here’s the abstract:

This paper explores the use of natural language processing (NLP) tools and their utility for learner language analyses through a comparison of automatic linguistic annotation against a gold standard produced by humans. While there are a number of automated annotation tools for English currently available, little research is available on the accuracy of these tools when annotating learner data. We compare the performance of three linguistic annotation tools (a tagger and two parsers) on academic writing in English produced by learners (both L1 and L2 English speakers). We focus on lexico-grammatical patterns, including both phrasal and clausal features, since these are frequently investigated in applied linguistics studies. Our results report both precision and recall of annotation output for argumentative texts in English across four L1s: Arabic, Chinese, English, and Korean. We close with a discussion of the benefits and drawbacks of using automatic tools to annotate learner language.

Picoral, A., Staples, S., & Reppen, R. (2021). Automated annotation of learner English: An evaluation of software tools. International Journal of Learner Corpus Research, 7(1), 17–52. https://doi.org/10.1075/ijlcr.20003.pic
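For readers curious what this kind of evaluation looks like in practice, here is a minimal sketch in Python. It is our own invented example, not the paper’s tools, texts, or tagset: it aligns an automatic tagger’s output with human gold-standard annotations token by token and computes precision and recall for a single tag.

# Minimal sketch of gold-standard evaluation. All tags and tokens
# below are hypothetical, for illustration only.

def precision_recall(gold, predicted, label):
    """Precision and recall for one tag over aligned token lists."""
    tp = sum(g == label and p == label for g, p in zip(gold, predicted))
    fp = sum(g != label and p == label for g, p in zip(gold, predicted))
    fn = sum(g == label and p != label for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Ten tokens of (invented) learner writing: human annotations vs.
# an automatic tagger's output for the same tokens.
gold = ["NOUN", "VERB", "NOUN", "ADJ", "NOUN", "VERB", "NOUN", "ADP", "NOUN", "VERB"]
auto = ["NOUN", "VERB", "ADJ",  "ADJ", "NOUN", "NOUN", "NOUN", "ADP", "NOUN", "VERB"]

p, r = precision_recall(gold, auto, "NOUN")
print(f"NOUN precision: {p:.2f}, recall: {r:.2f}")  # 0.80 each here

Precision asks how often the tool’s NOUN tags are correct; recall asks how many of the true nouns it found. This is the same trade-off the article examines for phrasal and clausal features across the four L1 groups.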

We thank all of the Crow researchers and Crow friends who supported this good work, and the editorial teams, reviewers, and funders who made it possible. 

About

The Corpus & Repository of Writing, an inter-institutional and inter-disciplinary research project building a corpus of student writing articulated with a repository of pedagogical artifacts.