Predicting Information Retrieval Performance

Author: Robert M. Losee
Publisher: Springer
ISBN: 9783031011894
Category: Computers
Languages: en
Pages: 59

Book Description
Information Retrieval performance measures are usually retrospective in nature, representing the effectiveness of an experimental process. However, in the sciences, phenomena may be predicted, given parameter values of the system. After developing a measure that can be applied retrospectively or can be predicted, performance of a system using a single term can be predicted given several different types of probabilistic distributions. Information Retrieval performance can be predicted with multiple terms, where statistical dependence between terms exists and is understood. These predictive models may be applied to realistic problems, and then the results may be used to validate the accuracy of the methods used. The application of metadata or index labels can be used to determine whether or not these features should be used in particular cases. Linguistic information, such as part-of-speech tag information, can increase the discrimination value of existing terminology and can be studied predictively. This work provides methods for measuring performance that may be used predictively. Means of predicting these performance measures are provided, both for the simple case of a single term in the query and for multiple terms. Methods of applying these formulae are also suggested.
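The single-term case the description mentions can be illustrated with a standard probabilistic-IR calculation (a hypothetical sketch of my own, not Losee's actual model): given estimates of how often a term occurs in relevant versus non-relevant documents, the precision of retrieving every document containing the term follows directly.

```python
def single_term_precision(p_rel, p_nonrel, n_rel, n_nonrel):
    """Predicted precision when every document containing the term is retrieved.

    p_rel / p_nonrel: probability the term occurs in a relevant /
    non-relevant document; n_rel / n_nonrel: how many of each the
    collection holds. (Illustrative names, not taken from the book.)
    """
    retrieved_rel = p_rel * n_rel            # expected relevant docs retrieved
    retrieved_nonrel = p_nonrel * n_nonrel   # expected non-relevant docs retrieved
    return retrieved_rel / (retrieved_rel + retrieved_nonrel)

# A term occurring in 80% of 50 relevant docs and 5% of 950 others:
print(single_term_precision(0.8, 0.05, 50, 950))  # ≈ 0.457
```

The same expected-count reasoning extends to recall and to multi-term queries once term dependence is modeled, which is the harder case the book treats.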

Estimating the Query Difficulty for Information Retrieval

Author: David Carmel
Publisher: Springer Nature
ISBN: 3031022726
Category: Computers
Languages: en
Pages: 77

Book Description
Many information retrieval (IR) systems suffer from a radical variance in performance when responding to users' queries. Even for systems that succeed very well on average, the quality of results returned for some queries is poor. It is therefore desirable that IR systems be able to identify "difficult" queries so they can be handled properly. Understanding why some queries are inherently more difficult than others is essential for IR, and a good answer to this important question will help search engines reduce the variance in performance and better serve their users' needs. Estimating query difficulty is an attempt to quantify the quality of search results retrieved for a query from a given collection of documents. This book discusses the reasons that cause search engines to fail for some queries, then reviews recent approaches for estimating query difficulty in the IR field. It then describes a common methodology for evaluating the prediction quality of those estimators, and experiments with some of the predictors applied by various IR methods over several TREC benchmarks. Finally, it discusses potential applications that can utilize query difficulty estimators by handling each query individually and selectively, based upon its estimated difficulty. Table of Contents: Introduction - The Robustness Problem of Information Retrieval / Basic Concepts / Query Performance Prediction Methods / Pre-Retrieval Prediction Methods / Post-Retrieval Prediction Methods / Combining Predictors / A General Model for Query Difficulty / Applications of Query Difficulty Estimation / Summary and Conclusions
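As a concrete taste of the pre-retrieval prediction methods the book surveys, the sketch below scores a query by the average inverse document frequency (IDF) of its terms; queries built from specific, rare terms are commonly predicted to be easier than vague ones. The function and the toy collection statistics are illustrative assumptions, not code from the book.

```python
import math

def avg_idf_predictor(query_terms, doc_freqs, num_docs):
    """Pre-retrieval difficulty score: mean IDF of the query terms.

    A higher score (more specific terms) is usually read as
    predicting an easier query; unseen terms default to df = 1.
    """
    idfs = [math.log(num_docs / doc_freqs.get(term, 1)) for term in query_terms]
    return sum(idfs) / len(idfs)

# Toy collection statistics: document frequency per term, 1000 docs total.
dfs = {"information": 800, "retrieval": 120, "quantum": 5}
print(avg_idf_predictor(["information", "retrieval"], dfs, 1000))  # vague query: ≈ 1.17
print(avg_idf_predictor(["quantum", "retrieval"], dfs, 1000))      # specific query: ≈ 3.71
```

Post-retrieval predictors, by contrast, inspect the retrieved result list itself (score distributions, list robustness), which is why the book treats the two families in separate chapters.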

Methods for Evaluating Interactive Information Retrieval Systems with Users

Author: Diane Kelly
Publisher: Now Publishers Inc
ISBN: 1601982240
Category: Database management
Languages: en
Pages: 246

Book Description
Provides an overview and instruction on the evaluation of interactive information retrieval systems with users.

Advances in Information Retrieval

Author: Cathal Gurrin
Publisher: Springer
ISBN: 3642122752
Category: Computers
Languages: en
Pages: 677

Book Description
These proceedings contain the papers presented at ECIR 2010, the 32nd European Conference on Information Retrieval. The conference was organized by the Knowledge Media Institute (KMi), the Open University, in co-operation with Dublin City University and the University of Essex, and was supported by the Information Retrieval Specialist Group of the British Computer Society (BCS-IRSG) and the Special Interest Group on Information Retrieval (ACM SIGIR). It was held during March 28-31, 2010 in Milton Keynes, UK. ECIR 2010 received a total of 202 full-paper submissions from Continental Europe (40%), UK (14%), North and South America (15%), Asia and Australia (28%), and the Middle East and Africa (3%). All submitted papers were reviewed by at least three members of the international Program Committee. Out of the 202 papers, 44 were selected as full research papers. ECIR has always been a conference with a strong student focus. To allow as much interaction between delegates as possible and to keep in the spirit of the conference, we decided to run ECIR 2010 as a single-track event. As a result we decided to have two presentation formats for full papers: some of them were presented orally, the others in poster format. The presentation format does not represent any difference in quality. Instead, the presentation format was decided after the full papers had been accepted, at the Program Committee meeting held at the University of Essex. The views of the reviewers were then taken into consideration to select the most appropriate presentation format for each paper.

Advances in Information Retrieval

Author: Leif Azzopardi
Publisher: Springer
ISBN: 3030157199
Category: Computers
Languages: en
Pages: 439

Book Description
This two-volume set LNCS 11437 and 11438 constitutes the refereed proceedings of the 41st European Conference on IR Research, ECIR 2019, held in Cologne, Germany, in April 2019. The 48 full papers presented together with 2 keynote papers, 44 short papers, 8 demonstration papers, 8 invited CLEF papers, 11 doctoral consortium papers, 4 workshop papers, and 4 tutorials were carefully reviewed and selected from 365 submissions. They were organized in topical sections named: Modeling Relations; Classification and Search; Recommender Systems; Graphs; Query Analytics; Representation; Reproducibility (Systems); Reproducibility (Application); Neural IR; Cross Lingual IR; QA and Conversational Search; Topic Modeling; Metrics; Image IR; Short Papers; Demonstration Papers; CLEF Organizers Lab Track; Doctoral Consortium Papers; Workshops; and Tutorials.

String Processing and Information Retrieval

Author: Edgar Chavez
Publisher: Springer Science & Business Media
ISBN: 3642163203
Category: Computers
Languages: en
Pages: 421

Book Description
This volume contains the papers presented at the 17th International Symposium on String Processing and Information Retrieval (SPIRE 2010), held October 11-13, 2010 in Los Cabos, Mexico. The annual SPIRE conference provides researchers within fields related to string processing and/or information retrieval a possibility to present their original contributions and to meet and talk with other researchers with similar interests. The call for papers invited submissions related to string processing (dictionary algorithms; text searching; pattern matching; text and sequence compression; automata-based string processing), information retrieval (information retrieval models; indexing; ranking and filtering; querying and interface design), natural language processing (text analysis; text mining; machine learning; information extraction; language models; knowledge representation), search applications and usage (cross-lingual information access systems; multimedia information access; digital libraries; collaborative retrieval and Web-related applications; semi-structured data retrieval; evaluation), and the interaction of biology and computation (DNA sequencing and applications in molecular biology; evolution and phylogenetics; recognition of genes and regulatory elements; sequence-driven protein structure prediction). The papers presented at the symposium were selected from 109 submissions written by authors from 30 different countries. Each submission was reviewed by at least three reviewers, with a maximum of five reviews for particularly challenging papers. The Program Committee accepted 39 papers (corresponding to a roughly 35% acceptance rate): 26 long papers and 13 short papers. In addition to these presentations, SPIRE 2010 also featured invited talks by Gonzalo Navarro (Universidad de Chile) and Marc Najork (Microsoft Research, USA).

Simulating Information Retrieval Test Collections

Author: David Hawking
Publisher: Springer Nature
ISBN: 3031023234
Category: Computers
Languages: en
Pages: 162

Book Description
Simulated test collections may find application in situations where real datasets cannot easily be accessed due to confidentiality concerns or practical inconvenience. They can potentially support Information Retrieval (IR) experimentation, tuning, validation, performance prediction, and hardware sizing. Naturally, the accuracy and usefulness of results obtained from a simulation depend upon the fidelity and generality of the models which underpin it. The fidelity of emulation of a real corpus is likely to be limited by the requirement that confidential information in the real corpus should not be able to be extracted from the emulated version. We present a range of methods exploring trade-offs between emulation fidelity and degree of preservation of privacy. We present three different simple types of text generator which work at a micro level: Markov models, neural net models, and substitution ciphers. We also describe macro level methods where we can engineer macro properties of a corpus, giving a range of models for each of the salient properties: document length distribution, word frequency distribution (for independent and non-independent cases), word length and textual representation, and corpus growth. We present results of emulating existing corpora and for scaling up corpora by two orders of magnitude. We show that simulated collections generated with relatively simple methods are suitable for some purposes and can be generated very quickly. Indeed it may sometimes be feasible to embed a simple lightweight corpus generator into an indexer for the purpose of efficiency studies. Naturally, a corpus of artificial text cannot support IR experimentation in the absence of a set of compatible queries. We discuss and experiment with published methods for query generation and query log emulation. We present a proof-of-the-pudding study in which we observe the predictive accuracy of efficiency and effectiveness results obtained on emulated versions of TREC corpora. The study includes three open-source retrieval systems and several TREC datasets. There is a trade-off between confidentiality and prediction accuracy and there are interesting interactions between retrieval systems and datasets. Our tentative conclusion is that there are emulation methods which achieve useful prediction accuracy while providing a level of confidentiality adequate for many applications. Many of the methods described here have been implemented in the open source project SynthaCorpus, accessible at: https://bitbucket.org/davidhawking/synthacorpus/
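The micro-level Markov approach mentioned above can be sketched very compactly; this order-1 (bigram) generator is an illustrative toy of that general class, not the SynthaCorpus implementation.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Emit synthetic text by randomly walking the bigram chain."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        successors = model.get(out[-1])
        if not successors:  # dead end: word was never followed by anything
            break
        out.append(rng.choice(successors))
    return " ".join(out)

training = "the quick brown fox jumps over the lazy dog the quick dog"
model = build_bigram_model(training)
print(generate(model, "the", 8))
```

Because the walk only reuses observed bigrams, the synthetic text shares low-order statistics with the training corpus while scrambling its content, which is one point on the fidelity-versus-privacy trade-off the description discusses.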

Advances in Information Retrieval Theory

Author: Giambattista Amati
Publisher: Springer
ISBN: 364223318X
Category: Computers
Languages: en
Pages: 346

Book Description
This book constitutes the refereed proceedings of the Third International Conference on the Theory of Information Retrieval, ICTIR 2011, held in Bertinoro, Italy, in September 2011. The 25 revised full papers and 13 short papers presented together with the abstracts of two invited talks were carefully reviewed and selected from 65 submissions. The papers cover topics ranging from query expansion, co-occurrence analysis, user and interactive modelling, system performance prediction and comparison, and probabilistic approaches for ranking and modelling IR, to topics related to interdisciplinary approaches or applications. They are organized into the following topical sections: predicting query performance; latent semantic analysis and word co-occurrence analysis; query expansion and re-ranking; comparison of information retrieval systems and approximate search; probability ranking principle and alternatives; interdisciplinary approaches; user and relevance; result diversification and query disambiguation; and logical operators and descriptive approaches.