=========================================================================
Date:         Tue, 2 Feb 1993 14:27:42 CST
Reply-To:     Peter Flynn
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         Peter Flynn
Subject:      Re: getting the curia dtd attached to the tei dtds

I'm nearly there, thank you all for your help. As I said, I've defined
%chowder to refer to the assorted fishy bits we are using for content
markup. Now all I have to do is enable our dtd to be attached to the TEI
chain. It's probably in the docs (heh) but I don't have time to RTFM
right now, so excuse what may seem to be dim questions.

The TEI constructs assume (I think, or rather, sgmls needs):

   [SGML.dec]
   [TEI1.dtd]--->[lots of other TEI dtd files]
   and eventually
      etc etc
      the encoded text itself

Correct? OK, where do I slip in our own mods? Currently I have

   [SGML.dec]
   [curia.dtd]--->[TEI1.dtd and lots of other TEI dtd files]

but we need to go

      etc etc
      the encoded text itself

or something like, because at some stage we need to escape from the
construct of ...... so that we can branch off into <stuff>lotsa
stuff</stuff>.

How do I make this branch out of the TEI framework into our own?
Clearly, I could bodge TEIbase1.dtd but I don't want to do that. What's
the TEI.additional entry there for? Can I slip things in here, and if
so, how can I still make <stuff> branch out of its stuff without doing
damage to people wanting to read other TEI-compliant files than our own?
Or is it as simple as ...

///Peter
=========================================================================
Date:         Tue, 2 Feb 1993 14:28:48 CST
Reply-To:     Susan Hockey
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         Susan Hockey
Subject:      CETH 1993 Summer Seminar in Electronic Texts in the Humanities

[This is being cross-posted to several lists. Apologies if you receive
it more than once: --SH]

            CENTER FOR ELECTRONIC TEXTS IN THE HUMANITIES

         Electronic Texts in the Humanities: Methods and Tools

   The Second Annual Summer Seminar at Princeton University, New Jersey

                           August 1-13, 1993

                             organized by
 The Center for Electronic Texts in the Humanities, Princeton and Rutgers
                       with the co-sponsorship of
     The Centre for Computing in the Humanities, University of Toronto

The Center for Electronic Texts in the Humanities (CETH) is again
offering an intensive two-week seminar during August 1993. The seminar
will address a wide range of challenges and opportunities that
electronic texts and software offer to teachers, scholars and librarians
in the humanities.

Discussions on the capture, markup, retrieval, presentation,
transformation, and analysis of electronic text will prepare students
for extensive hands-on experience with illustrative software, e.g.,
MTAS, Micro-OCP, WordCruncher, Tact, and hypertext. Resources on CD-ROM
and Internet, such as the OED, Perseus, CDWORD, and several large
textual collections in classical Greek, Latin, French, Italian, and
English, will be demonstrated so that participants may make informed
evaluations of their significance in the light of current and future
technologies. Approaches to markup, from ad hoc schemes to the
systematic design of the Text Encoding Initiative, will be surveyed and
considered.

The focus of the Seminar will be practical and methodological, with the
immediate aim of assisting participants in their own teaching, research,
and advising.
It will be concerned with the demonstrable benefits of using electronic
texts, with typical problems and how to solve them, and with the ways in
which software fits or can be adapted to common methods of textual
study. Participants will be expected to work on coherent projects,
preferably of their own devising, and will be given the opportunity to
present them on the last day. Throughout the Seminar, the instructors
will provide assistance with designing projects, locating sources for
texts and software, and solving practical problems.

Ample computing facilities will be available 24 hours per day. A small
library of essential articles and books in humanities computing will be
on hand to supplement printed seminar materials, which include an
extensive bibliography. Special lectures will describe current research
in the field and address research topics, as well as the role of the
library in the use of electronic texts.

The Seminar is intended for faculty, students, librarians, technical
advisers, and academic administrators with direct responsibilities for
humanities computing support. It assumes basic computing experience, but
not necessarily experience with the application of computers to academic
research and teaching. The number of participants will be limited to 30.

Provisional Schedule

Week 1, August 1-6, 1993

Sunday, August 1. Registration and introductions

Monday, August 2. The electronic text
a.m. What electronic texts are and where to find them; survey of
existing inventories, archives, and other current resources. History of
computer-assisted text analysis in the humanities. Introduction to
simple concordancing with MTAS, including practical session.
p.m. Creating and capturing texts in electronic form; keyboard entry vs.
optical scanning. Demonstration of optical character-recognition
technology. Introduction to text encoding, surveying ad hoc methods,
e.g. COCOA, WordCruncher, TLG beta code; problems of these methods.
Practical exercise in deciding what to encode in typical texts.

Tuesday, August 3. Concordancing
a.m. A focussed look at computer-assisted concordance generation; types
of concordances, their specific advantages and disadvantages.
Alphabetization, character sequences, sorting, and forms of
presentation. Introduction to Micro-OCP; practical session in its use.
p.m. Further work on concordancing with Micro-OCP.

Wednesday, August 4. The interactive concordance
a.m. Indexed, interactive retrieval vs. batch concordance generation.
Textual problems and interpretative approaches particularly suitable to
an interactive system; the continuing use of concordances in hardcopy.
Preparation of text for indexed retrieval; differing roles of markup and
external "rules"; kinds of displays and their augmentation through
post-processing. Introduction to Tact.
p.m. Practical work using Tact: simple markup, compilation of a textual
database, and methods of inquiry.

Thursday, August 5. Stylistics; SGML
a.m. Stylistic comparisons and authorship studies using concordance
tools; basic statistics for lexical and stylistic analysis. Case
studies, e.g. Federalist Papers, Kenny on Aristotle, Burrows on Jane
Austen.
p.m. Introduction to the Standard Generalized Markup Language (SGML) and
the Text Encoding Initiative (TEI). Document structure and SGML
elements. Start-tags, end-tags, and empty tags. Document type
declarations. Group tagging of simple examples. SGML entities and their
uses: character representation, boilerplate text, file management.
Introduction to TEI Core tags and base tags for prose. Group tagging of
examples using TEI tags.
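(By way of illustration -- a minimal sketch with invented element names,
not taken from the seminar materials -- an SGML-tagged document of the
kind built up in these exercises has the following shape:

   <!DOCTYPE letter [
   <!ELEMENT letter  - -  (salute, p+)    >
   <!ELEMENT salute  - -  (#PCDATA)       >
   <!ELEMENT p       - O  (#PCDATA | pb)* >
   <!ELEMENT pb      - O  EMPTY           >
   ]>
   <letter>
   <salute>Dear Madam,</salute>
   <p>Thank you for your enquiry about the seminar.
   <p>A reply will follow<pb>on the next page.
   </letter>

Here <letter> and </letter> are a start-tag/end-tag pair; <pb>, a page
break, is an empty element taking no end-tag; the end-tags of the <p>
elements are omitted, as the declarations permit; and the declarations
between the square brackets make up the document type declaration.)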
Friday, August 6. SGML and TEI
a.m. The TEI Header: documentation for electronic texts. The file
description; the encoding description; the text profile; the revision
history. Overview of the TEI DTDs: base tag sets, additional tag sets,
and auxiliary document types.
p.m. Using TEI in practice. Overview of available commercial and
public-domain software (the latter will be distributed to participants).
Creating TEI texts; validation; processing. Tools for processing SGML
texts: commercial and public-domain. Examples: translating a TEI text
into COCOA (for OCP), WordCruncher format, TACT format. Practical
session creating and validating TEI-conformant texts.

Week 2, August 9-13, 1993

Monday, August 9. Scholarly editions
a.m. Overview of tools for preparing critical editions. Constructing
glossaries and material for commentary; application of Micro-OCP and/or
Tact. Collation; single-text vs. multiple-text methods. Overview of
software tools. Introduction to Collate.
p.m. Electronic publication. Discussion of methods and implications.

Tuesday, August 10. Electronic Dictionaries
a.m. The electronic dictionary; from machine-readable dictionary to
computational lexicon. What the New OED and other online dictionaries
can do for the scholar. Uses of lexical knowledge bases in text
retrieval. Building a simple online lexicon with Tact.
p.m. Individual project work.

Wednesday, August 11. Hypertext
a.m. Hypertext and hypermedia: techniques of presentation and
organization of textual data for analysis; possible combinations of
hypertext and concordancing methods. Reading and writing the
hypertextual book; hypertextual note-taking and annotating. Practical
introduction to constructing a hypertext.
p.m. Further practical session on building a hypertextual system.
Demonstration and discussion of Perseus, StorySpace and Voyager texts.

Thursday, August 12. Evaluation; Projects
a.m. Review of the previous week's work. Discussion of the limitations
of existing software. Advanced analytical tools not commonly available,
e.g. pattern recognizers, lemmatization systems, morphological
analyzers, parsers; overview of these. The contributions of
computational linguistics and artificial intelligence, and where
research in these areas is headed. Examination of some existing
resources.
p.m. Completion of project work.

Friday, August 13. Projects
a.m. Presentation of participants' projects.
p.m. Concluding discussion of basic questions. What, from a scholarly
and methodological perspective, is to be gained? What are the probable
effects on research and teaching? What can one learn from the collision
of automatic methods with intuitive perceptions? What is the role of
humanities computing: merely an efficient facilitator of traditional
work, or a fundamental component for pursuing new questions? Where do we
go from here with software, and with its application? How can the
machine better assist us in educating the imagination?

The Center for Electronic Texts in the Humanities

The Center for Electronic Texts in the Humanities was established in
October 1991 by Rutgers and Princeton Universities with external support
from the Mellon Foundation and the National Endowment for the
Humanities. As a national focus of interest in the U.S.
for those who are involved in the creation, dissemination and use of
electronic texts in the humanities, it also acts as a national node on
an international network of centers and projects which are actively
involved in the handling of electronic texts. Developed from the
international inventory of machine-readable texts which was begun at
Rutgers in 1983 and is held on RLIN, the Center is now reviewing the
records in the inventory and continues to catalog new texts. The
acquisition and dissemination of text files to the community is another
important activity, concentrating on a selection of good quality texts
which can be made available over Internet with suitable retrieval
software and with appropriate copyright permission. The Center also acts
as a clearinghouse on information related to electronic texts, directing
enquirers to other sources of information.

Instructors

The seminar will be taught by Susan Hockey and Willard McCarty, with
assistance from Michael Sperberg-McQueen (SGML and TEI), Elli Mylonas
(Hypertext) and staff of Computing and Information Technology,
Princeton.

Susan Hockey is Director of the Center for Electronic Texts in the
Humanities. Before moving to the USA in October 1991, she spent 16 years
at Oxford University Computing Service, where her most recent position
was Director of the Computers in Teaching Initiative Centre for Textual
Studies. At Oxford she was responsible for various humanities computing
projects including the development of the Oxford Concordance Program
(OCP), an academic typesetting service for British universities, and OCR
scanning. She has taught courses on humanities computing for fifteen
years and has given numerous guest lectures on various aspects of
computing in the humanities. She is the author of three books and
numerous articles on humanities computing and has been Chair of the
Association for Literary and Linguistic Computing since 1984. She is a
member (currently Chair) of the Steering Committee of the Text Encoding
Initiative.

Willard McCarty has been active in humanities computing since 1977. With
its founding Director, Ian Lancashire, he helped to set up the Centre
for Computing in the Humanities, University of Toronto, of which he is
now the Assistant Director. He was the founding editor of Humanist, the
principal electronic seminar for computing humanists, and has edited
several other publications in the field. He regularly gives talks,
papers, and lectures throughout North America and Europe. McCarty took
his Ph.D. in English literature in 1984; his current literary research
is in classical studies, especially the Metamorphoses of Ovid. In
support of a forthcoming book, he has an electronic edition of that poem
underway for the text-retrieval program Tact.

Elli Mylonas is a Research Associate in Classics at Harvard University,
and is currently the Managing Editor of the Perseus Project. She has
co-taught tutorials on "Teaching with Hypertext" at the Hypertext
meetings in San Antonio and Milan (1991, 1992). In addition to
coordinating the Perseus Project, her responsibilities cover the
creation and structuring of the textual component of the project, and
working together with the user interface designers and documentation
specialists. She is the project leader for Pandora, a Macintosh search
program for the TLG and PHI disks.
Elli Mylonas is a founding member and one of the two organizers of CHUG
(Computing in the Humanities User's Group), a humanities computing
seminar that has been meeting biweekly at Brown University for the last
4 years. She is also on the Text Representation Committee of the Text
Encoding Initiative, where she has worked on identifying SGML structures
for tagging reference systems, drama and verse in literary texts. She
has published and spoken on hypertext, descriptive markup and literary
texts, and the use of computers in education.

C. M. Sperberg-McQueen studied Germanic medieval literature in the
comparative literature program at Stanford University; since 1980 he has
been working to bring computing technology to bear on problems of
textual research. In 1985 and 1986, he served as a consultant for
humanities computing in the Princeton University Computer Center; since
1987 he has worked at the academic computer center at the University of
Illinois at Chicago, where he is now a senior research programmer. He is
a member of the steering committee, and the editor in chief, of the Text
Encoding Initiative.

Fees

The cost of participating in this Summer Seminar will be $895, including
tuition, use of computer facilities, student accommodation, breakfast
and lunch at Princeton for the two weeks, and banquet and reception.
Students pay a reduced rate of $795. For those who prefer hotel
accommodations, the cost is $645 to cover tuition, lunch, the banquet
and reception, and $565 for students. There will be 24-hour access to
networked microcomputers in the student accommodation throughout the
seminar.

Application Procedure

To apply for participation in this Summer Seminar, submit a one-page
statement of interest. The statement should indicate (1) how
participation in the Seminar would be relevant for your teaching,
research, librarianship, advising or administrative work, and possibly
that of your colleagues; (2) what project you would like to undertake
during the Seminar, or what area of the humanities you would most like
to explore; and (3) the extent of your computing experience.

Applications must be attached to a cover sheet specifying your name,
current institutional affiliation and position, postal and email
addresses, and phone and fax numbers, as available, as well as natural
language interest and computing experience. Currently enrolled students
must also include a photocopy of a valid student ID. E-mail submissions
should have a subject line `Summer Seminar Application'.

The statement must be received by the reviewing committee, consisting of
members of the Center's Governing Board, by APRIL 15, 1993, at the
address below. Those who have been selected to attend will be notified
by May 15, 1993. Payment will be requested at this time.

Summer Seminar 1993
Center for Electronic Texts           phone:    (908) 932-1384
  in the Humanities                   fax:      (908) 932-1386
169 College Avenue                    bitnet:   ceth@zodiac
New Brunswick, NJ 08903               internet: ceth@zodiac.rutgers.edu
USA
=========================================================================
Date:         Tue, 2 Feb 1993 14:30:52 CST
Reply-To:     "Bradford A. Morgan"
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         "Bradford A. Morgan"

----------------------------Original message----------------------------

             South Dakota School of Mines and Technology

                           POSITION VACANCY

                            VICE PRESIDENT

An academic vice president is sought for this public science and
engineering university dedicated to excellence.
The 2,500 undergraduate and graduate students are involved in 32 degree
programs, through the Ph.D. in some disciplines. Students have combined
average ACT scores of 25. Located in the Black Hills near Mount Rushmore
and the Badlands, and not far from the Big Horn Mountain Range of
Wyoming, greater Rapid City is a community of 85,000 with a favorable
quality-of-life/cost-of-living ratio in a forested setting with
ponderosa pines, Black Hills spruce, hiking trail networks, dinosaur
relics, Sioux tradition, and national caves.

APPLICATIONS and NOMINATIONS

The Vice President selected should help the South Dakota School of Mines
and Technology foster a positive environment by promoting professional
development and global telecommunications. Selection will be guided by
the following criteria:

1. An awareness of innovative learning methodologies existing in the
   engineering, science, and humanities curricula elsewhere and their
   transferability to curricula here.

2. An awareness of computer communications such as the Internet, which
   has particular impact on overcoming the geographical isolation
   experienced by students and faculty at this institution.

3. The willingness to support faculty as they innovatively explore new
   teaching/learning strategies.

4. An awareness of faculty development programs which seek to strengthen
   classroom teaching and research abilities.

5. A recognition of the necessity for developing ongoing and continual
   faculty-student-administrative dialogue on issues of all kinds, both
   as they affect these groups and as they pertain to an intellectually
   stimulating environment for these groups.

6. An awareness of the complexity of issues and the willingness to take
   risks which push our institution into the national arena as a premier
   undergraduate institution.

7. A desire to cultivate and affirm diverse attitudes, personalities,
   ideas, and temperaments to benefit students and faculty alike,
   allowing us to grow in social consciousness and awareness
   complementary to our solid scientific and technical backgrounds.

Candidates for the position must have an earned doctorate in a
discipline of engineering or science and possess the necessary
administrative experience. Nominations will be accepted. Applicants
should submit a cover letter explaining their interest in the position,
along with the names, addresses, and phone numbers of at least five
references, and a statement of goals, to:

   Dr. Douglas K. Lange and Dr. Harold D. Orville
   Co-Chairs, VP Search and Screen Committee
   South Dakota School of Mines and Technology
   501 E. St. Joseph Street
   Rapid City, SD 57701-3995

Review of the applications begins February 15, 1993, and will continue
until a suitable candidate is hired. An Equal Opportunity Employer.
=========================================================================
Date:         Tue, 9 Feb 1993 14:15:18 CST
Reply-To:     "C. M. Sperberg-McQueen"
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         "C. M. Sperberg-McQueen"
Subject:      Re: getting the curia dtd attached to the tei dtds
In-Reply-To:  Your message of Tue, 2 Feb 1993 14:27:42 CST

On Tue, 2 Feb 1993 14:27:42 CST Peter Flynn said:
>I'm nearly there, thank you all for your help.
>
>The TEI constructs assume (I think, or rather, sgmls needs):
>
>   [SGML.dec]
>   [TEI1.dtd]--->[lots of other TEI dtd files]
>   and eventually
>      etc etc
>      the encoded text itself
>
>Correct? ...

Correct.

>... OK, where do I slip in our own mods? Currently I have
> ...
>How do I make this branch out of the TEI framework into our own?
>Clearly, I could bodge TEIbase1.dtd but I don't want to do that. What's
>the TEI.additional entry there for? Can I slip things in here, and if
>so, how can I still make <stuff> branch out of its stuff without doing
>damage to people wanting to read other TEI-compliant files than our
>own? Or is it as simple as ...

1 make a file containing the declarations for your new stuff -- the
%chowder; elements you have defined.

2 for each TEI DTD file whose declarations you need to change, make a
copy containing exactly the same elements, with your mods, leaving the
distributed original untouched.

3 make a file containing the part of TEI1.DTD which defines the entities
for classes of elements -- and possibly also the entity
global.attributes, if you want to change that. Let us call this
LOCents.DTD. In the copy, change the entity declarations so that your
additional elements are included where you want them. (In your case you
may wish just to define %soup; as including %chowder;, or as being
identical to it.) (Remember to respect the declaration-before-use rules
of SGML.)

So far we have the following: one file containing your new stuff, one or
more containing exactly the same elements as in the corresponding TEI
DTD files, but with your mods, and a final file containing your mods to
the element class system used in the DTDs. We have thus obeyed the
sacred software-maintenance injunction "Keep *your* stuff (your mods)
separate from *their* stuff (the unmodified materials)."

Now we can put fire to the fuse:

4 Let your document type declaration look like the following:

   <!DOCTYPE tei.1 SYSTEM "tei1.dtd" [
   <!ENTITY % CURIA.classes SYSTEM "LOCents.dtd" >
   <!-- entity declarations pointing the standard TEI files
        at your modified local copies go here -->
   %CURIA.classes;
   ]>

Alternatively, if you don't like the thought of having all this at the
top of every file, put all the part of it that occurs between the square
brackets into another file called, say, CURmod.dtd, and let your
documents begin thus:

   <!DOCTYPE tei.1 SYSTEM "tei1.dtd" [
   <!ENTITY % dtdmods SYSTEM "CURmod.dtd" >
   %dtdmods;
   ]>

By showing, in the DTD subset, exactly where local files are overriding
the standard TEI files, we make it much clearer for later users of the
document to figure out where the tagging is the same as the TEI P1 tag
set, and where it has been changed. The methods being prepared for P2
will make such changes slightly easier to do without actually touching
*any* of the standard files, and continue to flag the places where the
local DTD varies from the standard.

Good luck.

Michael Sperberg-McQueen
=========================================================================
Date:         Tue, 9 Feb 1993 14:21:55 CST
Reply-To:     John Price-Wilkin
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         John Price-Wilkin
Subject:      Mac to entity references

I needed to put something together in lex to convert Mac special
characters to SGML entity references and thought it might be of general
value to the group. It's pretty straightforward and easily modified,
with entries in the following format:

   \200   printf("&Auml;");
   \201   printf("&Aring;");
   \202   printf("&Ccedil;");
   \203   printf("&Eacute;");
   \204   printf("&Ntilde;");

It only covers values 200 and above. I've put it out for anonymous ftp
on bowers.lib.Virginia.EDU in pub as mac2tei.lex. As much as possible,
I've drawn the entity names from the TEI declarations, but it's also
been necessary to turn to such things as ISOtech and Latin1/2. Any
suggested changes will be gratefully accepted.

John Price-Wilkin
=========================================================================
Date:         Fri, 12 Feb 1993 17:36:24 CST
Reply-To:     Syd Bauman
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         Syd Bauman
Subject:      Re: getting the curia dtd attached to the tei

>You have hit the major gotcha of TEI P1. The mechanisms required to
>make the DTDs easily modifiable in the way you need are not in place in
>the P1 DTDs. Sorry, but we'll have them in by the time P2 is done.
>
>You may be proud, or appalled, to know that you are the first person I
>know of who has run into these problems. Perhaps no one else is
>actually trying to use the P1 DTDs for serious work.

Hey, dude--the reason some of us haven't put much effort into P1 and are
waiting for P2 is precisely that we want easier modification! :-)
=========================================================================
Date:         Wed, 17 Feb 1993 14:13:05 CST
Reply-To:     Paul Mangiafico
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         Paul Mangiafico
Subject:      E-text information via Gopher and FTP

We thought this announcement might be of interest to your community. It
is also being sent to a number of other discussion groups in the library
and humanities communities.

Paul Mangiafico, project assistant
Center for Text & Technology          pmangiafico@guvax.bitnet
Academic Computer Center, 238 Reiss   pmangiafico@guvax.georgetown.edu
Georgetown University                 tel: 202-687-6096
Washington, DC 20057 USA              fax: 202-687-6003

           CPET DIGESTS NOW AVAILABLE VIA GOPHER AND FTP

For the past four years, Georgetown University's Center for Text &
Technology (CTT), under the aegis of the Academic Computer Center, has
been compiling a directory of projects that create and analyze
electronic text in the humanities. A relational database accessible via
the Internet, Georgetown's Catalogue of Projects in Electronic Text
(CPET) includes information on more than 350 projects throughout the
world.

Now digests of project information -- organized by humanities discipline
and by language of the electronic text -- can be read, searched, and
retrieved by means of the Internet's protocols for Gopher and anonymous
FTP. There are digests for 40 different languages, as well as for
linguistics, literature, philosophy, biblical studies, and a variety of
others, ranging from Medieval and Renaissance studies to Archaeology,
African studies, and Buddhism.

GOPHER - INSTRUCTIONS FOR ACCESS

The CPET digests are organized into subdirectories on Georgetown
University's Gopher server. If you have never used Gopher, you may wish
to consult your local Internet expert to determine whether you have
access to Gopher client software or to obtain instructions for using it.
At many locations, one simply types the word GOPHER at the system prompt
of the networked mainframe.

Once inside the main Gopher directory, look for CPET files under:

   Other Gopher and Information Servers
     North America
       USA
         Washington, DC
           Georgetown University

Please note that the menu item for Washington, DC, appears
alphabetically after Washington state and not after Delaware.

On the Georgetown server look into the directory
CPET_PROJECTS_IN_ELECTRONIC_TEXT, where you will find the following
files and subdirectories:

   1. CPET_DIGESTS_INTRODUCTION.TXT  (information on the digests)
   2. CPET_INTRODUCTION.TXT          (information on the CPET database)
   3. CPET_USER_GUIDE.TXT            (how to access the on-line database)
   4. DIGESTS_DISCIPLINES.DIR        (digests organized by discipline)
   5. DIGESTS_LANGUAGES.DIR          (digests organized by language)

The filenames of the digests have as extensions the approximate size in
kilobytes of each file; filesize will determine the length of time
needed to acquire the file. Before retrieving any of the digests, please
read the introductory file (CPET_DIGESTS_INTRODUCTION.TXT).

FTP - INSTRUCTIONS FOR ACCESS

The digests are arranged in a similar structure on Georgetown's FTP
server.
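In outline, a complete retrieval session looks like this (a condensed
sketch of the steps spelled out below, using the sample file from those
instructions; the digests_languages directory is assumed here, since
finnish is one of the language digests):

   ftp guvax.georgetown.edu
       (login: ANONYMOUS; password: YOURNAME@YOURSITE)
   ftp> cd cpet_projects_in_electronic_text
   ftp> dir
   ftp> cd digests_languages
   ftp> get finnish.17K
   ftp> bye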
To survey the digests, first enter the following command at your system
prompt:

   ftp guvax.georgetown.edu     (or: ftp 141.161.1.2)

When requested, login with the username ANONYMOUS and a password
according to the formula YOURNAME@YOURSITE. Once within GUVAX, at the
ftp prompt (often either ftp> or *), change directories as follows:

   ftp> cd cpet_projects_in_electronic_text

If you then enter a directory command -- DIR -- you will find the same
files and subdirectories that are described in the preceding section of
these directions on Gopher.

To inspect the other directories in a subdirectory, change directories
again. Do not enter the .DIR extension or the version number, and
distinguish between hyphens and underscores when typing the filenames.
For example, at the prompt enter a command such as the following:

   ftp> cd digests_disciplines

To explore further the directory structure and the file contents, enter
the commands to show the directory (DIR) or to change the directory (CD)
as often as necessary.

Note: some subdirectories contain more than one complete screen of
filenames, so when you enter a dir command, the initial contents of the
subdirectory may scroll off the screen. To stop the scrolling, use
whatever device your system permits. For example, with VAX VMS one would
use CTRL-S (that is, hold down the CTRL key and press the S key) to stop
scrolling and CTRL-Q to continue scrolling.

To retrieve a file, type at the ftp prompt the command GET followed by
the name of the file (with the filename extension) that you wish to
retrieve. For example,

   ftp> get finnish.17K

A system message will confirm that the file has been transferred to your
computer (more specifically, to the directory from which you invoked
ftp). To leave FTP, enter at the prompt the command BYE.

   ftp> bye

If you have any questions or comments on this service, or would like to
learn more about CPET and Georgetown's Center for Text and Technology,
please contact us at the address below.

   Georgetown Catalogue of Projects in Electronic Text (CPET)
   Center for Text & Technology
   Academic Computer Center, Reiss 238
   Georgetown University, Washington, DC 20057 USA
   tel: 202-687-6096     fax: 202-687-6003

   Contacts:
   Paul Mangiafico, CPET Project Assistant
   pmangiafico@guvax.georgetown.edu
   Dr. Michael Neuman, Director, Center for Text & Technology
   neuman@guvax.georgetown.edu
=========================================================================
Date:         Wed, 17 Feb 1993 19:49:10 CST
Reply-To:     Lars Bruzelius
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         Lars Bruzelius
Subject:      8th European Xplor Conference Call for Papers

The Eighth European Xplor Conference & Exhibit will take place in The
Hague, June 7-10, 1993. Xplor International is a non-profit
user/professional/trade organization for those involved with electronic
document systems.

The conference committee has always considered presentations on
electronic document standards an important part of the programme. Last
year's conference included such speakers as Erik van Herwijnen, Johan
van Wingen, and Dr Schlupp. For this year we have already scheduled a
session on ISO 10175 (the "Palladium" architecture for distributed
printing) with speakers from IBM and Bull. To this we would like to add
at least one more session on relevant electronic document standards.

Please send suggestions on topics and speakers, as well as queries for
more information on Xplor, to the address below.
We are trying to finalize the program before mid-March, and any suitable
suggestions will be taken on a first-come, first-served basis.

This Call for Papers is being sent to several discussion lists. Please
excuse any cross-postings.

Lars Bruzelius
Conference Chairman
Uppsala University Computer Center
Box 174, S-751 04 Uppsala, Sweden.
Telephone: +46 18 18 77 31      Bitnet:   uddri@seudac21
Telefax:   +46 18 51 66 00      Internet: Lars_Bruzelius@udac.uu.se
=========================================================================
Date:         Thu, 18 Feb 1993 15:57:19 CST
Reply-To:     Robin Cover
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         Robin Cover
Subject:      TEI tags for interpolations

Do the P1 or current P2 fascicles offer guidance on the encoding of
interpolations (additions, substitutions, deletions, clarifying
expansions) where such interpolations require variable punctuation
depending upon style manual rules? I'm not asking about how to mark up a
text already in paper print (e.g., where a text uses ellipses,
square-brackets or angle-brackets to punctuate interpolations), but how
to represent interpolations I want to make (create) in quoted textual
material.

I can visualize how the <add>, <del> and <corr> tags might be used for
such purpose in conjunction with <quote>, but the Guidelines don't seem
to sanction this usage, unless I missed something in my hasty read. The
<add>, <del> and <corr> tags seem bent in the direction of transcribing
(print) text -- are they to be used inside of <quote> material to manage
the variable processing of interpolations as well?

Since style manuals vary greatly in their rules for punctuating
interpolated text (brackets, varying numbers of ellipsis points,
depending upon context) and for (not) representing original content
versus interpolated content, it would seem useful to have standard
conventions for representing quoted text that's been altered in various
ways (substitute phraseology, corrected grammar, addition, deletion,
clarification, expletive-sanitation) -- allowing style rules to
determine proper processing in various paper or electronic hypertext
reading environments.

Comments? Help in P1/P2 I've overlooked?

Robin Cover
-------------------------------------------------------------------------
Robin Cover              BITNET:   zrcc1001@smuvm1 ("one-zero-zero-one")
6634 Sarah Drive         Internet: robin@utafll.uta.edu ("uta-ef-el-el")
Dallas, TX 75236 USA     Internet: zrcc1001@vm.cis.smu.edu
Tel: (1 214) 296-1783    Internet: robin@ling.uta.edu
FAX: (1 214) 709-2433    Internet: robin@txsil.sil.org
=========================================================================
Date:         Fri, 19 Feb 1993 13:58:47 CST
Reply-To:     D.Wujastyk@ucl.ac.uk
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         Dominik Wujastyk
Subject:      TEI and Author/Editor; Qwertz; TeX

I've recently installed SoftQuad's Author/Editor SGML-aware editor. Has
anyone got experience with using TEI DTDs with A/E? Similarly, there is
a system called qwertz which includes DTDs for LaTeX styles. Has anyone
used these with A/E?

Finally, what is the current leading edge of technology :-) for
translating SGML documents into TeX for printing?
Dominik
----------------
Dominik Wujastyk          d.wujastyk@ucl.ac.uk     +44 71 611 8467
=========================================================================
Date:         Tue, 23 Feb 1993 14:02:44 CST
Reply-To:     gobenaus@ux1.cso.uiuc.edu
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
Comments:     Warning -- original Sender: tag was gobenaus@UX1.CSO.UIUC.EDU
From:         Gerhard Obenaus
Subject:      ISO Guidelines

Hi there,

I'm interested in obtaining an electronic copy of ISO/TC 37 WI 18,
Coding of Bibliographic References in Terminology Work and
Terminography, and several other ISO documents. Does anybody know if
they are available electronically, and where? Is the only place to buy
those documents ISO in Geneva, or are they available through a local
distributor in the US? I have several documents, but nowhere does it say
where to order them other than from Switzerland. Also, is there a way to
gain access to working drafts, such as ISO WD 12620, Dictionary data
type elements?

Thanks in advance for your suggestions.

Gerhard Obenaus
University of Illinois
Internet:   g-obenaus@uiuc.edu
CompuServe: 71660,3545
=========================================================================
Date:         Tue, 23 Feb 1993 19:20:19 CST
Reply-To:     andras@gatekeeper.calera.com
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         andras@gatekeeper.calera.com
Subject:      ISO standards

Gerhard,

recently I received the following suggestion from Carl Malamud
(carl@malamud.com):

>[...] ISO is being quite silent about
>their plans and I'm not expecting to see much from them. The ITU has
>a few things on-line on their Teledoc system. Send mail to:
>
>   teledoc@itu.arcom.ch
>
>and put "HELP" in the body of your message. It will send back
>instructions. Be warned that the system is kind of slow and
>doesn't have too much on it.

I tried this, and the help message did indeed return some instructions.
However, the queries I produced based on these messages never returned
any useful information. If you have more luck, please post the steps you
followed.

Andras Kornai (andras@calera.com)
=========================================================================
Date:         Sat, 27 Feb 1993 11:37:34 CST
Reply-To:     "CHRIS TIFFIN, ENGLISH"
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         "CHRIS TIFFIN, ENGLISH"

Hello,

I need to learn the current standard set of protocols for text
designated for electronic storage. The project is a series of works of
Australian literature which are being carefully edited, and we are
looking forward to electronic editions of them. I am reading SGML
manuals; what else should I do? Any suggestions gratefully received.

Many thanks

Chris Tiffin, University of Queensland, Australia
=========================================================================
Date:         Sat, 27 Feb 1993 12:14:43 CST
Reply-To:     colin@cogsci.ed.ac.uk
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         colin@cogsci.ed.ac.uk
Subject:      SGML and bibliographies

I would greatly appreciate some advice on this subject. I'm completely
new to SGML, btw, so please treat me gently.

Briefly, I'm working on a project which aims to assist copy editors in
handling bibliographies - the input is raw text, so there's no
assumption that anything like BibTeX is around. We're parsing the texts,
using a basic chart parser, into a fairly comprehensive representation
of the contents, and after a bit of casting around, we've more or less
decided that the output should be translated into SGML.
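To make that concrete: a raw entry such as

   Smith, J. (1990). The History of Whatever. Oxford: Somepress.

might come out of the parser tagged along these lines (a rough sketch
only -- the element names here are invented for illustration, not drawn
from any settled bibliography DTD):

   <bibl>
   <author>Smith, J.</author>
   <date>1990</date>
   <title>The History of Whatever</title>
   <pubplace>Oxford</pubplace>
   <publisher>Somepress</publisher>
   </bibl>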
I understand that work has been done on standardising bibliography DTDs,
and so on, and that there has been, or is, or will be, a TEI project on
the subject. Any information about that would be extremely welcome.
(I've read the section on bibliographies in "Practical SGML" - I'm
hoping that's not all the help I'm going to get.)

The attraction of SGML, of course, is that we can blithely assume the
presence of SGML-LaTeX convertors, and so on. However, one problem with
this is that, for our purposes, editors have to be able to control
certain things about the appearance of the text, such as the font that a
title appears in, in order to capture a house style correctly. As I
understand it, specifying such information using an SGML attribute is
frowned upon, as it goes against the spirit of describing the semantics
rather than the syntax of text. If so, what should I do? I certainly
don't want to be in the position of creating a mapping from SGML, via a
publisher's house style sheet, into every possible document formatting
device.

Colin
=========================================================================
Date:         Sat, 27 Feb 1993 14:39:55 CST
Reply-To:     "Wendy Plotkin, TEI (312) 413-0331"
Sender:       "TEI-L: Text Encoding Initiative public discussion list"
From:         "Wendy Plotkin, TEI (312) 413-0331"
Subject:      Protocols for Electronic Storage of Texts

Chris Tiffin of the University of Queensland recently posted a note to
TEI-L about protocols to use in preparing electronic editions of a
series of works of Australian literature. Here at the TEI, we of course
favor the recommendations developed in the last four years and included
in TEI P2. To date, the following fascicles have been released:

1. Base Tag Set for Transcriptions of Spoken Texts (Chapter 10: TS)
   (23 April 1992)
2. Characters and Character Sets (Chapter 4: CH) (17 July 1992)
3. TEI Header (Chapter 5: HD) (19 August 1992)
4. Base Tag Set for Prose (Chapter 6: PR) (24 October 1992)
5. Formal SGML Grammar for TEI-Interchange (Chapter 42: GR)
   (4 December 1992)
6. Tags Available in All TEI DTDs (Chapter 7: CO) (10 December 1992)
7. Base Tag Set for Terminological Data (Chapter 13: TE)
   (23 December 1992)
8. Segmentation and Alignment (Chapter 16: SA) (26 January 1993)

Additional fascicles will be announced on TEI-L as published.
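By way of illustration, the header described in fascicle 3 wraps the
documentation for an electronic text in a <teiHeader> element; in
outline (a simplified sketch, not a complete or validated example), a
minimal header looks something like this:

   <teiHeader>
   <fileDesc>
   <titleStmt>
   <title>Poems: an electronic edition</title>
   </titleStmt>
   <publicationStmt>
   <p>Distributed by the editors.</p>
   </publicationStmt>
   <sourceDesc>
   <p>Transcribed from the first printed edition.</p>
   </sourceDesc>
   </fileDesc>
   </teiHeader>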