Bridging Fields, Issue 2 (Spring/Summer 2025)
Scripts of Access: Fixing the Romanization Gap

Eric Jiefei Deng

MA, Near Eastern Studies, New York University, 2027

MSLIS, Palmer School of Information, Long Island University, 2027

This paper examines the enduring structural and usability issues surrounding the Romanization of non-Roman scripts in North American library catalogs, particularly through the lens of ALA-LC transliteration standards. Focusing on OPAC searchability, discoverability, and patron-centered design, it critiques the historical reliance on transcription systems that are often unintuitive, error-prone, and disconnected from the linguistic logic of the source languages. Using examples from JACKPHY+ and other multilingual collections, the paper explores how Model A cataloging—featuring parallel fields in both vernacular and Romanized script—can enhance access and equity. It further investigates the limitations of current transliteration practices for languages such as Persian, Korean, and Turkic dialects, and underscores the need to revise ALA-LC Romanization tables to reflect community usage and technological possibilities.

Libraries today are more interconnected than ever, and with evolving RDA guidelines, the North American library is embracing its place in these global networks.[1] Academic and research libraries play key roles in these networks as centers of resources and knowledge, creating original records for a wide variety of materials. As library budgets shrink, the ivory tower becomes more of a guiding lighthouse, providing metadata and records for resources that less well-endowed institutions are not staffed to handle.[2] This paper discusses trends in issues concerning non-Roman script materials in library catalogs, with detailed examples from all the JACKPHY+ languages, which now include Thai and Armenian.[3] This overview of foundational, grandfathered-in structural issues in the transliteration of non-Roman scripts and in metadata creation is framed around the utility of the system for both patrons and librarians.[4] A guiding concern for patron needs and habits leads this paper to focus on searching and title records for non-Roman materials in OPACs. This paper argues for the utility of Model A non-Roman script cataloging practices, which maintain parallel linked fields of Romanization and vernacular script in records.[5] Care is taken to acknowledge the particularities of specific languages and their distinct needs, but this paper hopes to highlight major issues that can be fixed. It also presents the potential of new retroactive “de-transliterating” projects to add vernacular script parallel titles to legacy records and bring catalogs in line with RDA guidelines,[6] and advocates for the reform and overhaul of certain ALA-LC transcription tables and practices in parallel with these projects. Improved metadata practices for non-Roman script records will result in more faceted, intuitive, and accessible catalogs and allow libraries to be key sites in the preservation and vitality of less-spoken languages.

Origins of Transliteration in Cataloging

The origins of transliteration of non-Roman scripts into the Roman alphabet as the basis for North American cataloging can be found in the earliest guides, with many pioneering figures in knowledge organization noting the need to address the specific challenges these materials posed for metadata.[7] The methods of handling non-Roman script materials can all be seen as falling on a scale that balances competing needs: indexing with the rest of the Latin script catalog, usability for library staff, discoverability for patrons, and compatibility with in-house knowledge organization methods and technology. This rough matrix of heuristics is useful for comparing the benefits and complications of the various methods and standards for dealing with non-Roman metadata. For example, early recommendations by Charles A. Cutter to keep separate catalogs for non-Roman languages (excluding Greek and Hebrew) paved over issues of compatibility but segregated materials from the rest of the collection and hampered discoverability.[8] Later card cataloging practice, mirroring the modern Model B standard, recorded the vernacular script and added only minimal Romanization in certain fields to allow collation with the rest of the Roman script catalog and use by general library staff.[9] The progression to OPACs and MARC standards allowed for digitization, increased efficiency, and new possibilities, but due to the technological limits of operating systems at the time, centralized Romanization became the foundation for digital cataloging of vernacular script materials.[10] This paper will go further into these issues and the lingering aftereffects of this stage of vernacular script metadata processing. Support for JACKPHY+CG scripts, along with other technological advancements, allowed for the digital inclusion of vernacular scripts for the first time, but only as parallel subfields to transliterated fields, as illustrated below.[11] Even with the advancement of BIBFRAME and RDA policy changes requiring parallel subfields in vernacular script, lingering issues and challenges remain.[12] While the pace of change is slow, implementation is slower. The catalog is a product of this long history, and the legacy records one finds are artifacts of previous standards and practices. Retroactive implementation of current parallel field standards is a fruitful field of innovation in non-Roman cataloging and, as I will argue later, an opportunity to implement wider reforms to transliteration tables and standards for many languages.
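
For illustration, a Model A record pairs each Romanized field with a linked vernacular field through the MARC 21 $6 subfield. The title below is my own invented example (Chinese, with LC pinyin word division); the 245/880 linkage mechanism itself is standard:

    245 10 $6 880-01 $a Zhongguo li shi
    880 10 $6 245-01 $a 中国历史

Under Model B, by contrast, the vernacular form sits directly in the regular fields, without a parallel Romanized pair.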

Issues of Transliteration

The process of transliteration, by its nature transforming one script into another, is destructive, because no two writing systems are the same.[13] Even transliteration from alphabetic systems like Cyrillic or Armenian to Latin results in a loss of information encoded in the vernacular script.[14] Such losses of encoded meaning make transliteration, even when intuitive, difficult for non-specialists to parse.[15] Transliteration inherently strips away information encoded in the original script as it forces the text into a form that Latin script can represent. This means excluding tone in tonal languages, which Latin script is ill-equipped to encode;[16] supplying vowels in scripts that do not normally write them, such as the abjads Arabic and Hebrew;[17] and imposing word spacing, which does not exist in many Southeast Asian scripts or in Mixed Script Korean.[18] Romanization forces languages to mark distinctions their vernacular scripts do not, and discards distinctions that Latin script cannot represent. Transcriptions are never the same as the original script, and even a perfect transcription would still be unintuitive for native speakers, whose languages are not usually represented in such a manner.[19] This is a major barrier not only to ease of use but to accessibility and usability.
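
To make the loss of tone concrete, here is a small Python sketch (the example is mine, not from the source; ALA-LC Chinese romanization uses toneless pinyin):

    # Four distinct Mandarin words whose pinyin differs only by tone.
    words = {"妈": "mā (mother)", "麻": "má (hemp)", "马": "mǎ (horse)", "骂": "mà (scold)"}

    # Stripping the tone marks, as a toneless romanized field effectively does:
    strip_tones = str.maketrans("āáǎà", "aaaa")
    for hanzi, gloss in words.items():
        print(hanzi, "->", gloss.split()[0].translate(strip_tones))
    # Every word indexes identically as "ma"; the four-way contrast is gone.

In a catalog, all four words collapse into the same searchable string, so precision is lost exactly where the vernacular script preserves it.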

ALA-LC Standards

All authority cataloging for non-Roman languages follows the ALA-LC transcription tables, when available. These tables are unintuitive to native speakers and often result in unintelligible text. Different languages are also transliterated on different linguistic principles. Some, like Cyrillic, Greek, Armenian, and Georgian (all alphabets), use systems that, when applied correctly, produce one-to-one correspondences between Latin and the vernacular script.[20] Others, such as Chinese or Korean, use what linguistics would more accurately term “transcriptions,” in that the Romanization is tied to pronunciation rather than to the script.[21] In the Korean case, this causes much confusion: the native Korean script, Hangul, is alphabetic, but the ALA-LC standard relies on a Romanization system that transcribes the pronunciation of words, affected by sound changes and inflectional processes, instead of the underlying written forms, as the sketch below shows. The complication of Korean transcription standards has resulted in the ALA-LC Korean transcription chart being the longest of the tables, at over 67 pages.[22]
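
A minimal Python sketch of that alphabet-versus-pronunciation gap, using the word 독립 (“independence”). The example and the letter-by-letter map are mine; the final form follows the McCune-Reischauer-based pronunciation rules the ALA-LC Korean table uses:

    import unicodedata

    # Hypothetical letter-by-letter map for the six jamo in 독립.
    letter_map = {
        "\u1103": "t",  # initial ㄷ
        "\u1169": "o",  # vowel ㅗ
        "\u11a8": "k",  # final ㄱ
        "\u1105": "r",  # initial ㄹ
        "\u1175": "i",  # vowel ㅣ
        "\u11b8": "p",  # final ㅂ
    }

    jamo = unicodedata.normalize("NFD", "독립")   # decompose syllables into letters
    print("".join(letter_map[j] for j in jamo))  # "tokrip": what the letters spell
    print("tongnip")                             # ALA-LC form: what speakers say (k+r -> ngn)

A cataloger must therefore know the pronunciation rules, not just the alphabet, and a patron must guess which of the two forms a record uses.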

ALA-LC tables also seem to reflect the trajectories and histories of US academia and area studies departments rather than native speaker usability. ALA-LC transcription standards for non-Slavic Cyrillic alphabets reflect a strong Russian influence and a lack of grounding in the phonology and logic of the vernacular languages themselves.[23] Similarly, the Persian table Romanizes the language as if it were essentially Arabic,[24] forcing catalogers into inaccurate and erroneous transcriptions. Central Asia and the Caucasus perfectly illustrate these issues of transcription standards created without native speaker input or logic in mind, since the region's languages have been written in different scripts in different countries and eras. Tajik, Farsi, and Dari are all mutually comprehensible varieties of Persian, but because their transcriptions come from either the Persian table or the non-Slavic Cyrillic table, the same titles are transcribed completely differently in records (see the sketch below). Azerbaijani and Uzbek can be written in Roman, Cyrillic, or Arabic script.[25] However, the ALA-LC tables for these languages yield completely different results depending on the starting script. Even Uzbek and Azeri in Latin script need to be transcribed, as not all letters used in these languages are supported by all systems.[26]
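
A minimal sketch of that divergence, using the shared Persian word for “book” (my own example; the mappings are simplified from the tables as I understand them):

    # Tajik (Cyrillic) spelling, transcribed letter by letter per the
    # non-Slavic Cyrillic table:
    tajik = "китоб"
    cyrillic_table = {"к": "k", "и": "i", "т": "t", "о": "o", "б": "b"}
    print("".join(cyrillic_table[c] for c in tajik))  # "kitob"

    # The Farsi/Dari (Perso-Arabic) spelling کتاب writes no short vowels, so
    # the Persian table supplies them with Arabic values, yielding "kitāb".

The same title in the same language is thus split between “kitob” and “kitāb” in the catalog, depending only on the script of the edition in hand.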

ALA-LC transcription is often inconsistent for the same language, and usually completely divorced from standard, common, or natively logical systems of transcription. In fact, ALA-LC transcription usually does not even align with the transcription systems of major journals in academic publishing.[27] ALA-LC also forces languages with different standards across borders to be transcribed identically so long as they share a script, even when character usage differs by country. A good example is Arabic, where variant characters semi-officially used for loan and dialect phonemes have conflicting representations depending on the country; the current standard is to transcribe them in the Eastern Arabic style even if the materials come from North Africa. ALA-LC standards also rely heavily on diacritics and symbols that are uncommon and visually similar, resulting in high rates of error for many languages.[28] In effect, ALA-LC transcription forces users to learn a new orthography solely for the purpose of searching OPACs.[29] Around the turn of the century, transcription standards for Chinese switched to the more commonly used Pinyin, although this was a slow change that took decades to approve.[30] This shows, however, that change is possible; it needs to become reality for more languages.

ALA-LC transcription standards are unintuitive, opaque, prone to error, and often designed without the potential patron in mind.[31] The primacy of the transcribed field over the vernacular script parallel fields also often results in OPAC search results showing only Romanized text in title records, even when the vernacular script is available in the full record.[32] This adds another viewpoint to the critiques of transcription: if transcription is a blunt, inaccurate, and finicky tool for finding what you are specifically looking for, it is an even worse tool for discovering what is available in a collection.[33] Illegibility of title records is a major issue in access to collections. If a user is unaware of proper transcription, mistypes a query, or encounters a record that was transcribed improperly, the number of results a search returns varies considerably.[34] Additionally, reliance on transcription-based records ignores cross-linguistic connections that would otherwise be legible in vernacular script.[35] Classical Chinese materials are a good example, as they are legible to Japanese and Korean patrons as well. Similar cases, though to a lesser degree, exist for Ottoman Turkish, Persian, and Arabic materials on certain subjects, such as religion. A user may struggle when searching for something they know should be there, but they are massively handicapped when searching without knowing what is there. Catalog records are supposed to help patrons find the appropriate resource. Searches that fail to pull up all, or enough, relevant resources exacerbate the underutilization of non-Roman materials and depress their usage metrics.[36] As written by James E. Agenbroad:

“Any romanization we use is at best an inconvenient hindrance and at worst a severe stumbling block between the user and the book. If this is so, then the next question must be: ‘Why romanize for filers if this inconveniences readers?’”[37]

In concept, however, transcription does have its uses when a vernacular script poorly encodes pronunciation. Transcription reduces errors and difficulty when Romanization is more phonologically sound and consistent than the native script. A classic example is Japanese, where words can be written with variant characters and in variant scripts, and proper nouns can only be pronounced with certainty through background knowledge and context.[38] Japanese is a classic “many-to-many” language, in which many sounds correspond to many writing systems, many characters, and many meanings, often arbitrarily; see the sketch below. Romanization provides a uniform way to collate and search for materials that are semantically and/or phonologically the same but orthographically very different.[39] A major caveat applies to tonal languages like Chinese or Thai, whose Romanization standards ignore tone and thus represent their languages consistently but incompletely. The result is a poor transliteration of the vernacular characters and a poor transcription of the sounds and meanings as well.[40] This is especially acute in Chinese, where formal and classical registers often rely on the logographic characters of the vernacular script to distinguish homophones: meaning often depends on the visual encoding of the script, not on its audible reading, and therefore not on its transcription. Even for Chinese, it is noted that many Chinese OPACs include Romanization in their records, because the Romanization of names serves the purposes of indexing and sorting.[41] Transcribed fields provide additional tools for patron discoverability but cannot be the only, or arguably even the main, tool.
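
A small illustrative sketch of these many-to-many correspondences (the examples are mine, not from the source):

    # One romanized form corresponds to several distinct words...
    homophones = {"kami": ["神 (god)", "紙 (paper)", "髪 (hair)", "上 (upper)"]}
    # ...and one character corresponds to several readings.
    readings = {"生": ["sei", "shō", "nama", "i(kiru)", "u(mareru)"]}

    for roman, entries in homophones.items():
        print(roman, "->", ", ".join(entries))
    for kanji, sounds in readings.items():
        print(kanji, "->", ", ".join(sounds))

A romanized field collates all the “kami” words together, which aids searching by sound even though only the vernacular script can distinguish them.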

New Systems and Future Goals

There has been significant and constant movement toward the implementation of a Model A system of records containing parallel fields of both Romanized and vernacular script, along with a strengthened emphasis on discovery and on vernacular script primacy.[42] This is in line with RDA guidelines, which now, in theory, require the inclusion of vernacular script. To bring records in line with these new regulations, in recognition of the benefits of parallel fields, and given the limited staffing and budgets of most libraries, many automated processes are in development and testing to add missing vernacular script to older records.[43] These projects range from the promotion of macros and tools that let non-specialist staff or student workers engage more meaningfully with non-Roman materials, to full pilot projects testing the automated reverse transliteration of Romanized records back into vernacular script;[44] a minimal sketch of the idea follows this paragraph. So far there have been interesting publications on the de-romanization of Cyrillic records in Russian and Ukrainian, as well as Armenian. The alphabetic nature of these scripts aligns well with Latin script, so it makes sense that these languages are the starting point.[45] The issues and lessons learned from these projects provide insight and set expectations for other scripts where transcription is more divorced from the vernacular script.
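
As a minimal sketch of the core technique, assuming a simplified subset of the ALA-LC Russian table (real projects layer record cleanup, context rules, and human review on top of this):

    import re

    # Simplified reverse map from ALA-LC Russian romanization to Cyrillic.
    # Digraphs must match before single letters, so keys are tried longest first.
    rev = {
        "shch": "щ", "zh": "ж", "kh": "х", "ts": "ц", "ch": "ч", "sh": "ш",
        "a": "а", "b": "б", "v": "в", "g": "г", "d": "д", "e": "е",
        "i": "и", "k": "к", "l": "л", "m": "м", "n": "н", "o": "о",
        "p": "п", "r": "р", "s": "с", "t": "т", "u": "у", "f": "ф",
    }
    pattern = re.compile("|".join(sorted(rev, key=len, reverse=True)))

    def detransliterate(text: str) -> str:
        """Greedy longest-match substitution; applies no linguistic judgment."""
        return pattern.sub(lambda m: rev[m.group(0)], text.lower())

    print(detransliterate("Moskva"))  # москва

The ambiguities are exactly where such tools fail: strict ALA-LC disambiguates with ligatures and diacritics (t͡s for ц, ė for э), but legacy records routinely drop them, so a plain “ts” may be ц or the cluster т+с and a plain “e” may be е or э. Proper nouns and loanwords then need dictionary lookups or human resolution, as the projects discussed below found.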

Common errors in these projects often result from poor transliteration in the legacy record, historical changes to orthography and language, competing orthographic standards, confusion concerning diacritic use, and interference from a related language. The need for significant preemptive cleanup of legacy records has been noted for these Cyrillic and Armenian programs.[46] These projects highlighted how individuals with advanced linguistic skills were still needed to determine correct forms and resolve errors.[47] Proper nouns in particular needed significant special attention. The absence of tools with reliable computational linguistics routines for automatically confirming correct spellings was also noted. The projects further observed that no small percentage of newly created OCLC records in Russian and Ukrainian still lacked vernacular script fields.[48] Such de-transliteration tools might therefore be useful not only for legacy records but also for ensuring that new records follow updated RDA guidelines. Further, such “de-transliterating” could provide vernacular script entry points for more intuitive discovery of non-Roman materials, especially for native speakers.[49] If searching and discovering in the vernacular script is the most accurate and intuitive way of searching, it behooves libraries to provide as complete a discovery experience as possible.

This is not to say that vernacular script searching is the only way one should search for materials in discovery layers. While vernacular searching is the most intuitive for the patron, it might not always cast the widest net, for reasons not directly tied to the problems of transliteration. Due to the historical inertia outlined in this paper, correct transliterated searches are still the most universal way to pull all records, including legacy records that have not yet been updated.[50] Japanese, with its many-to-many correspondences between reading, meaning, and writing, serves as a key example, as outlined earlier, of the utility of transcribed fields in records.[51] Additionally, every ILS has different capacities, and many OPACs have localized, non-obvious parameters for faceted searching and indexing that can make Romanized searches more useful. Most academic and research libraries maintain LibGuides for patrons, and often internal guides as well, outlining the particularities of non-Roman materials. Such guides are crucial and need to be promoted, as many younger library users are unaccustomed to searching OPACs and may not realize that an OPAC is not as forgiving as a web browser search. Even library experts sometimes need a few attempts to find what they are looking for among non-Roman materials; for most users, these materials might as well be bibliographically lost.[52]

Considering the Sisyphean challenge of addressing all the issues and developments concerning non-Roman script languages, RDA guidelines, transliteration, and usability, solutions will have to be dynamic and multipronged. RDA guidelines that center vernacular script, in contrast to past practice, will go a long way toward making the full breadth of semantic meaning that vernacular scripts encode available to patrons.[53] Foregrounding vernacular script allows patrons familiar with a non-Roman language to easily read materials from the same cultural sphere. Distinct examples are the cross-linguistic legibility of the written language among Chinese, Japanese, and Korean Mixed Script.[54] Similar scenarios exist in the Middle East among Ottoman Turkish, Persian, and Arabic.[55] Additionally, for languages written in different scripts, transliteration should be unified. It is an added barrier to discovery when the same title in the same language can have such different fields due to the country or script of origin.[56] The Turkic examples of Azeri and Uzbek come to mind; Ottoman Turkish already follows such a practice, with Romanization emphasizing conversion into modern Turkish forms.[57]

As for transliteration standards, the ALA-LC Romanization Tables need to be developed and revised to center transliteration schemes oriented toward users and language communities. For certain languages, like Persian, there have been vocal complaints and calls for an accurate and intuitive transliteration scheme for over thirty years.[58] Such changes, informed by community needs and competencies, were implemented when Pinyin was enacted as the standard Romanization for Chinese.[59] ALA-LC has already shown its commitment to working with communities when developing transcription tables for newer scripts, such as ADLaM and Meetei.[60] Much as OCLC and certain universities are retroactively updating their records to follow new principles, ALA-LC should update its transcription tables to follow its own modern principles.

The need to train and hire multilingual professionals into the cataloging departments of libraries across the country cannot be overstated. Institutions should also invest in their staff by encouraging cross-linguistic capabilities, as many gaps in capacity can be filled by staff who know similar languages or scripts.[61] Encouraging such cross-linguistic capacities in staff increases the capacity of the library. Historical items, multi-script materials, and the evaluation and development of new tools all become possible with knowledgeable and diverse cohorts. Libraries cannot simply rely on copy cataloging and wait for another library to do the work for them, especially with foreign materials. Foreign materials do not always have reliable bibliographic data; ISBNs, for instance, may not be as controlled or distinct as in North America, if items have ISBNs at all.[62] Institutions should also actively train and encourage their staff to stay up to date with the latest standards and tools available. Backlogs are caused not just by staff lacking the ability to process diverse materials but also by staff who lack the confidence to process materials quickly to increasingly complex standards.[63]

The increased use of various macros and tools has helped understaffed libraries process more and more materials. It is important, however, to realize that even the most advanced transliteration and de-transliteration tools for most languages are neither spelling correction algorithms nor natural language processing modules.[64] They are all empirical, data-driven strategies to assist in the cataloger’s job. A qualified and trained librarian will still need to correct errors, evaluate proper nouns, and make up for the gaps in the tools’ abilities.[65] Much of the error correction is so context-reliant that it requires a trained professional to make the right judgment call.[66] For such a diverse array of languages, none of which use Roman script, there can be no single set of evaluation guidelines or strategies. Catalogers need to stay flexible with guidelines and use their best judgment.

Due to historical inertia,[67] there is doubt that BIBFRAME will diverge much from MARC-8’s limitations when it comes to non-Roman languages,[68] but that should not stop the testing and eventual inclusion of more scripts in the JACKPHY+ cohort.[69] Special attention should be paid to endangered and less-spoken languages, as it is often these languages that face the biggest challenges and rely most on union catalogs and copy cataloging.[70] Further, Unicode integration will help encourage the collection and description of even more diverse catalogs. It must be highlighted that, despite long histories of literary production and large speaker bases, no South Asian script references appear in authority records.

Libraries need more funding not just for their basic needs but for growth and efficiency.[71] Administrative staff, not just users and catalogers, also need to be aware of issues concerning non-Roman languages. It is through wider awareness that improvements can be made to structures and products, and that faster change and implementation can be enacted.[72] Foundational issues stemming from poor transcription standards lead to larger downstream problems: error-prone transcription, for example, delays the development of modern tools such as de-transcription workflows.[73]

Catalogers should take advantage of the recent changes[74] and shifts in focus, toward promoting the cultural and linguistic diversity of collections, centering user experience and enhanced discoverability, meeting the needs of users on a global scale, and promoting Unicode as a standard for cataloging and metadata,[75] to also fix the foundational problems in current transcription standards encoded in the ALA-LC Romanization tables.[76] There needs to be a serious reevaluation of transliteration schemes: Romanization does have its uses in metadata searching and indexing, but this does not mean we can ignore the major issues with many of the tables. Changes to non-Roman script representation in bibliographic records do not by themselves solve the problems inherent in the transliteration system that spurred those changes. There needs to be a tabula rasa for the transliteration tables if collections are to meet the lofty goals libraries and librarians hold themselves to.

In sum, a patron-based approach to non-Roman records is needed in order to fulfill the library’s mission as a public good in an equitable and fair way. Systematic and foundational issues around transliteration need to be assessed, with remediation implemented in a timely manner and concurrently with other ongoing changes. Technology has progressed and changed, allowing for more faceted and intuitive digital representation of non-Roman languages, and bibliographic cataloging needs to catch up. Cohesive, useful description and metadata are essential for discovery and academic inquiry. It is only through the elimination of legacy processes and hurdles that academic inquiry and sharing can be facilitated to the maximum extent.[77]

Author Bio:

Eric Jiefei Deng was born in Knoxville, Tennessee and raised in Northern New Jersey. He is a student in the NYU–LIU Dual Degree Program, pursuing an M.A. in Near Eastern Studies and an MLIS. He holds a B.A. in Political Science from Sciences Po Paris and a B.A. in History from Columbia University. Prior to graduate school, he worked at the Social Science Research Council and Columbia Law School’s Arthur W. Diamond Law Library. His research focuses on nationalism, identity, language, and institutions across diverse regions and time periods.


[1] Arlene G. Taylor, Daniel N. Joudrey, and Katherine M. Wisser, The Organization of Information, Fourth edition, Library and Information Science Text Series (Santa Barbara, California: Libraries Unlimited, an imprint of ABC-CLIO, LLC, 2018), 315.

[2] Joy DuBose, “Russian, Japanese, and Latin Oh My! Using Technology to Catalog Non-English Language Titles,” Cataloging & Classification Quarterly 57, no. 7–8 (November 17, 2019): 496–506, https://doi.org/10.1080/01639374.2019.1671929.

[3] “Updated SCS Policy Recommendations on Non-Latin Script Cross-Reference Special Coding Practice in the LC/NACO Name Authority File” (PCC Standing Committee on Standards, July 27, 2023), https://www.loc.gov/aba/pcc/scs/documents/scs-recommendations-non-latin-script-cross-reference-coding-practice.pdf.

[4] Jenny Toves, Roman Tashlitskyy, and Lana Soglasnova, “The Ukrainian Kyrylytsia, Restored: An Automation Project for Adding the Cyrillic Fields to Ukrainian Records in OCLC WorldCat,” East/West: Journal of Ukrainian Studies 8, no. 2 (2021): 307–20, https://doi.org/10.21226/ewjus626.

[5] Iman Dagher, “Path to Discovery! Romanization & Scripts for Non-Latin/Arabic Materials Challenges & Potential” (PowerPoint, ALA Midwinter Conference, Philadelphia, PA, January 26, 2020), https://www.loc.gov/aba/pcc/documents/PCC-Participants-Midwinter-2020-Dagher.pptx.

[6] Toves, Tashlitskyy, and Soglasnova, “The Ukrainian Kyrylytsia, Restored.”

[7] James E. Agenbroad, “Romanization Is Not Enough,” Cataloging & Classification Quarterly 42, no. 2 (June 5, 2006): 21–34, https://doi.org/10.1300/J104v42n02_03.

[8] Agenbroad.

[9] Agenbroad.

[10] DuBose, “Russian, Japanese, and Latin Oh My! Using Technology to Catalog Non-English Language Titles.”

[11] Peter V. Fletcher and Jenny Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog: Notes, Observations, and Conclusions,” Cataloging & Classification Quarterly 61, no. 3–4 (May 19, 2023): 346–57, https://doi.org/10.1080/01639374.2023.2229823.

[12] Dagher, “Path to Discovery! Romanization & Scripts for Non-Latin/Arabic Materials Challenges & Potential.”

[13] YooJin Ha, “A Study on User Satisfaction with CJK Romanization in the OCLC WorldCat System,” Journal of the Korean Society for Information Management 27, no. 2 (June 1, 2010): 95–115, https://doi.org/10.3743/KOSIM.2010.27.2.095.

[14] Toves, Tashlitskyy, and Soglasnova, “The Ukrainian Kyrylytsia, Restored”; Fletcher and Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog.”

[15] Ha, “A Study on User Satisfaction with CJK Romanization in the OCLC WorldCat System”; Wooseob Jeong, Joy Kim, and Miree Ku, “Spaces in Korean Bibliographic Records: To Be, or Not to Be,” Cataloging & Classification Quarterly 47, no. 8 (October 9, 2009): 708–21, https://doi.org/10.1080/01639370903203382; Agenbroad, “Romanization Is Not Enough”; “Multilingual Library – A Blog Devoted to Multilingual Issues in the Library Catalog,” August 28, 2024, https://elegantlexicon.com/lib/; Fereshteh Molavi, “Main Issues in Cataloging Persian Language Materials in North America,” Cataloging & Classification Quarterly 43, no. 2 (December 8, 2006): 77–82, https://doi.org/10.1300/J104v43n02_06.

[16] Chalermsee Olson, “Cataloging Southeast Asian Language Materials: The Case of the Thai Language,” Cataloging & Classification Quarterly 22, no. 2 (July 29, 1996): 19–28, https://doi.org/10.1300/J104v22n02_03.

[17] Dagher, “Path to Discovery! Romanization & Scripts for Non-Latin/Arabic Materials Challenges & Potential”; Edward A. Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish),” Cataloging & Classification Quarterly 17, no. 1–2 (December 14, 1993): 133–48, https://doi.org/10.1300/J104v17n01_09; Molavi, “Main Issues in Cataloging Persian Language Materials in North America.”

[18] Olson, “Cataloging Southeast Asian Language Materials”; Hollie White and Songphan Choemprayong, “Thai Catalogers’ Use and Perception of Cataloging Standards,” Cataloging & Classification Quarterly 57, no. 7–8 (November 17, 2019): 530–46, https://doi.org/10.1080/01639374.2019.1670767; Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records.”

[19] Ha, “A Study on User Satisfaction with CJK Romanization in the OCLC WorldCat System”; Taylor, Joudrey, and Wisser, The Organization of Information, 168.

[20] Fletcher and Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog.”

[21] Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records”; Joann S. Young, “Chinese Romanization Change: A Study on User Preference,” Cataloging & Classification Quarterly 15, no. 2 (November 11, 1992): 15–35, https://doi.org/10.1300/J104v15n02_03.

[22] Lia Contursi, “Importance of Romanization in CJK Records: Pros and Cons with Some Examples” (PowerPoint, Pennsylvania Convention Center, Nutter Theater, January 26, 2020), http://www.loc.gov/aba/pcc/documents/PCC-Participants-Midwinter-2020-Contursi.pptx.

[23] “Multilingual Library – A Blog Devoted to Multilingual Issues in the Library Catalog.”

[24] Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish)”; Molavi, “Main Issues in Cataloging Persian Language Materials in North America.”

[25] Michael Walter, “Central Asian Cataloging: Problems and Prospects,” Cataloging & Classification Quarterly 17, no. 1–2 (December 14, 1993): 149–58, https://doi.org/10.1300/J104v17n01_10.

[26] “Updated SCS Policy Recommendations on Non-Latin Script Cross-Reference Special Coding Practice in the LC/NACO Name Authority File.”

[27] Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish)”; Dagher, “Path to Discovery! Romanization & Scripts for Non-Latin/Arabic Materials Challenges & Potential.”

[28] Toves, Tashlitskyy, and Soglasnova, “The Ukrainian Kyrylytsia, Restored.”

[29] Agenbroad, “Romanization Is Not Enough.”

[30] Young, “Chinese Romanization Change.”

[31] Paul Frank, “Romanization: What Are We Gaining? What Are We Losing?” (PowerPoint, Pennsylvania Convention Center, Nutter Theater, January 26, 2020), http://www.loc.gov/aba/pcc/documents/PCC-Participants-Midwinter-2020-Frank.ppt.

[32] Dagher, “Path to Discovery! Romanization & Scripts for Non-Latin/Arabic Materials Challenges & Potential.”

[33] Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records.”

[34] Jeong, Kim, and Ku.

[35] Jeong, Kim, and Ku; Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish)”; Dagher, “Path to Discovery! Romanization & Scripts for Non-Latin/Arabic Materials Challenges & Potential”; Ha, “A Study on User Satisfaction with CJK Romanization in the OCLC WorldCat System.”

[36] Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records.”

[37] Agenbroad, “Romanization Is Not Enough.”

[38] Contursi, “Importance of Romanization in CJK Records: Pros and Cons with Some Examples.”

[39] “When CJK Metadata Gets Left Behind,” The Digital Orientalist (blog), January 24, 2023, https://digitalorientalist.com/2023/01/24/when-cjk-metadata-gets-left-behind/; Contursi, “Importance of Romanization in CJK Records: Pros and Cons with Some Examples.”

[40] Clément Arsenault, “Pinyin Romanization for OPAC Retrieval—Is Everyone Being Served?,” Information Technology & Libraries 21, no. 2 (June 2002): 45–50; Olson, “Cataloging Southeast Asian Language Materials.”

[41] Contursi, “Importance of Romanization in CJK Records: Pros and Cons with Some Examples.”

[42] Magda El-Sherbini, “Improving Resource Discoverability for Non-Roman Language Collections” (Subject Access: Unlimited Opportunities, Columbus, Ohio, USA, 2017), https://library.ifla.org/id/eprint/1982/.

[43] Fletcher and Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog”; Toves, Tashlitskyy, and Soglasnova, “The Ukrainian Kyrylytsia, Restored.”

[44] Sherab Chen, “Empowering Student Assistants in the Cataloging Department through Innovative Training: The E-Learning Courseware for Basic Cataloging (ECBC) Project,” Cataloging & Classification Quarterly 46, no. 2 (June 1, 2008): 221–34, https://doi.org/10.1080/01639370802177646; DuBose, “Russian, Japanese, and Latin Oh My! Using Technology to Catalog Non-English Language Titles.”

[45] Fletcher and Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog”; Toves, Tashlitskyy, and Soglasnova, “The Ukrainian Kyrylytsia, Restored.”

[46] Toves, Tashlitskyy, and Soglasnova, “The Ukrainian Kyrylytsia, Restored.”

[47] Toves, Tashlitskyy, and Soglasnova; Dagher, “Path to Discovery! Romanization & Scripts for Non-Latin/Arabic Materials Challenges & Potential”; Contursi, “Importance of Romanization in CJK Records: Pros and Cons with Some Examples”; Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish).”

[48] Fletcher and Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog.”

[49] “Multilingual Library – A Blog Devoted to Multilingual Issues in the Library Catalog”; Olson, “Cataloging Southeast Asian Language Materials”; Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish).”

[50] Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records”; Fletcher and Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog.”

[51] “When CJK Metadata Gets Left Behind”; Contursi, “Importance of Romanization in CJK Records: Pros and Cons with Some Examples.”

[52] Ha, “A Study on User Satisfaction with CJK Romanization in the OCLC WorldCat System”; Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records.”

[53] Dagher, “Path to Discovery! Romanization & Scripts for Non-Latin/Arabic Materials Challenges & Potential”; Michele Seikel, “No More Romanizing: The Attempt to Be Less Anglocentric in RDA,” Cataloging & Classification Quarterly 47, no. 8 (October 9, 2009): 741–48, https://doi.org/10.1080/01639370903203192; Taylor, Joudrey, and Wisser, The Organization of Information.

[54] Contursi, “Importance of Romanization in CJK Records: Pros and Cons with Some Examples”; Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records.”

[55] Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish)”; James Tilio Maccaferri, “Ottoman Turkish Cataloging I: Romanization,” Cataloging & Classification Quarterly 5, no. 4 (October 2, 1985): 17–38, https://doi.org/10.1300/J104v05n04_02.

[56] Walter, “Central Asian Cataloging.”

[57] Maccaferri, “Ottoman Turkish Cataloging I.”

[58] Molavi, “Main Issues in Cataloging Persian Language Materials in North America”; Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish)”; “Persian Romanization - MELA ConC,” May 23, 2023, https://mela.us/conc/cataloging-manuals/persian/romanization/.

[59] Arsenault, “Pinyin Romanization for OPAC Retrieval—Is Everyone Being Served?”; Young, “Chinese Romanization Change.”

[60] Beacher Wiggins, “Developing ALA-LC Romanization Tables Alongside New Technologies for Improved Discovery: Example: ADLaM” (PowerPoint, ALA Core Committee on Cataloging: African and Asian Materials & ALA-LC Romanization Table Review Board, Virtual, August 23, 2023), https://connect.ala.org/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=b2e939f5-7dad-e641-36cf-6898171fa5b8&forceDialog=0.

[61] Chen, “Empowering Student Assistants in the Cataloging Department through Innovative Training”; Eileen G. Abels, Lynne C. Howarth, and Linda C. Smith, “Envisioning Our Information Future and How to Educate for It,” Journal of Education for Library and Information Science 57, no. 2 (2016): 84–93.

[62] Olson, “Cataloging Southeast Asian Language Materials.”

[63] White and Choemprayong, “Thai Catalogers’ Use and Perception of Cataloging Standards.”

[64] Toves, Tashlitskyy, and Soglasnova, “The Ukrainian Kyrylytsia, Restored.”

[65] Fletcher and Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog.”

[66] Walter, “Central Asian Cataloging”; Jajko, “Cataloging of Middle Eastern Materials (Arabic, Persian, and Turkish)”; “Updated SCS Policy Recommendations on Non-Latin Script Cross-Reference Special Coding Practice in the LC/NACO Name Authority File.”

[67] Frank, “Romanization: What Are We Gaining? What Are We Losing?”

[68] “Multilingual Library – A Blog Devoted to Multilingual Issues in the Library Catalog.”

[69] Olson, “Cataloging Southeast Asian Language Materials”; Agenbroad, “Romanization Is Not Enough”; “Multilingual Library – A Blog Devoted to Multilingual Issues in the Library Catalog”; Usha Bhasker, “Languages of India: Cataloging Issues,” Cataloging & Classification Quarterly 17, no. 1–2 (December 14, 1993): 159–68, https://doi.org/10.1300/J104v17n01_11.

[70] “Multilingual Library – A Blog Devoted to Multilingual Issues in the Library Catalog”; Wiggins, “Developing ALA-LC Romanization Tables Alongside New Technologies for Improved Discovery: Example: ADLaM.”

[71] Abels, Howarth, and Smith, “Envisioning Our Information Future and How to Educate for It.”

[72] “South Asia Cooperative Collection Development Workshops Statement on Cataloging and Metadata, Spring 2021” (South Asia Cooperative Collection Development Workshops (SACOOP), 2021), https://guides.lib.utexas.edu/ld.php?content_id=60138108; Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records.”

[73] Toves, Tashlitskyy, and Soglasnova, “The Ukrainian Kyrylytsia, Restored”; Fletcher and Toves, “An Automated Cyrillic Script Project to Increase Non-Latin Access in the Catalog.”

[74] “Policy Issues on Non-Latin Script Input in BIBFRAME of FOLIO” (Library of Congress, November 3, 2023), http://www.loc.gov/aba/cataloging/romanization/Policy-Issues-NonLatin-BF-FOLIO.pdf.

[75] Jessalyn Zoom, “Embracing Authenticity: Non-Latin Script Input in BIBFRAME and FOLIO” (PowerPoint, PCC Operations Committee, Library of Congress, May 2, 2024), https://www.loc.gov/aba/pcc/documents/OpCo-2024/Zoom-Embracing-Authenticity.pdf.

[76] Walter, “Central Asian Cataloging”; Molavi, “Main Issues in Cataloging Persian Language Materials in North America”; Jeong, Kim, and Ku, “Spaces in Korean Bibliographic Records”; Ha, “A Study on User Satisfaction with CJK Romanization in the OCLC WorldCat System.”

[77] “South Asia Cooperative Collection Development Workshops Statement on Cataloguing and Metadata” (South Asia Cooperative Collection Development Workshops (SACOOP), Spring 2021), https://guides.lib.utexas.edu/ld.php?content_id=60138108.
