Research
Current and Completed Research Projects
Large-Scale Querying of Semantic (RDF) Data
The proliferation of the semantic web in the form of the Resource Description Framework (RDF) demands efficient, scalable, and distributed storage along with a highly available and fault-tolerant parallel processing strategy. More precisely, the rapid growth of RDF data raises the need for an efficient partitioning strategy over distributed data management systems that improves SPARQL query performance, regardless of query pattern shape, while minimizing pre-processing time. In this context, we are investigating new relational partitioning schemes for RDF data that further partition the existing Property Table into multiple tables based on distinct properties (each comprising all subjects with non-null values for that property) in order to minimize input data and join operations. We evaluate these techniques through extensive experiments on preprocessing cost and query performance, using benchmark datasets such as the Lehigh University Benchmark (LUBM) and the Waterloo SPARQL Diversity Test Suite (WatDiv) with varying numbers of triples (i.e., large-scale datasets on the order of billions of triples).
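The per-property splitting described above can be illustrated with a small sketch in plain Python. This is a toy illustration of the idea only, not the distributed Spark implementation used in the project, and all names in it are invented:

```python
from collections import defaultdict

def partition_by_property(triples):
    """Group RDF triples into one subject->object table per distinct
    property, mirroring the idea of splitting a wide Property Table
    into smaller per-property tables. (Multi-valued properties would
    need lists of objects; a single value is kept here for brevity.)"""
    tables = defaultdict(dict)
    for s, p, o in triples:
        tables[p][s] = o  # only subjects with non-null values appear
    return dict(tables)

triples = [
    (":alice", ":advisor", ":bob"),
    (":alice", ":memberOf", ":cse"),
    (":carol", ":memberOf", ":cse"),
]
tables = partition_by_property(triples)

# A SPARQL pattern touching only :memberOf now scans one small table
# instead of the full triple store.
print(sorted(tables[":memberOf"]))
```

Because each table holds only the subjects that actually have the property, a query over one property reads far less input and needs fewer joins.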
Recent Publications:
- M. Hassan*, S. K. Bansal. "S3QLRDF: Property Table Partitioning Scheme for Distributed SPARQL Querying of large-scale RDF data", in proceedings of IEEE International Conference on Smart Data Services (SMDS), Beijing, China 2020.
- M. Hassan*, S. K. Bansal. "Data Partitioning Scheme for Efficient Distributed RDF Querying Using Apache Spark", in proceedings of IEEE Intl. Conference on Semantic Computing (ICSC), Newport Beach, CA 2019 [28% acceptance rate].
- M. Hassan*, S. K. Bansal. "Semantic Data Querying over NoSQL Databases with Apache Spark", in Proceedings of IEEE International Conference on Information Reuse and Integration (IRI), pp. 364-371, Salt Lake City, Utah 2018.
- M. Hassan*, S. K. Bansal. "RDF Data Storage techniques for efficient SPARQL Query Processing using Distributed Computation Engines", in Proceedings of IEEE International Conference on Information Reuse and Integration (IRI), pp. 323-330, Salt Lake City, Utah 2018.
- M. Mammo*, M. Hassan*, S. K. Bansal. "Distributed SPARQL Querying over Big RDF Data using Presto-RDF", International Journal of Big Data (IJBD), 2(3), 2015, pp. 34-49.
Ontology Alignment and Matching:
Information or data sharing among different systems is very limited due to the heterogeneous nature of data in syntax, structure, and semantics. An ontology is a formal description of knowledge as a set of concepts within a specific domain and the relationships between them. Ontology matching is the process of finding semantic relationships between two or more ontological entities. It is an integral part of creating Linked Data, the practice of publishing and linking structured data on the web, and is used to overcome the limited semantic interoperability among the vast distributed systems available on the internet. Despite advances in Linked Data, ontology matching is still mostly done manually by domain experts, which is labor-intensive and error-prone. Matching, or linking, between a source and a target ontology is generally done at two levels: schema level and instance level. Schema-level matching aligns the concepts/classes of the source and target ontologies, whereas instance-level linking aligns their instances/individuals; a third, mixed type combines schema-level and instance-level linking. We investigate schema-level linking of ontologies and evaluate our approach using datasets published by the Ontology Alignment Evaluation Initiative (OAEI).
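As a minimal illustration of schema-level matching, the sketch below aligns class labels using a simple token-overlap (Jaccard) score. The actual OntoConnect system uses a recursive neural network rather than this naive baseline, and the class names here are invented:

```python
def jaccard(a, b):
    """Token-overlap similarity between two labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def align_schemas(source_classes, target_classes, threshold=0.5):
    """Naive schema-level alignment: pair each source class with its
    best-scoring target class when the score clears the threshold."""
    mapping = {}
    for s in source_classes:
        best, score = None, 0.0
        for t in target_classes:
            sc = jaccard(s, t)
            if sc > score:
                best, score = t, sc
        if score >= threshold:
            mapping[s] = best
    return mapping

src = ["Conference Paper", "Author"]
tgt = ["Paper", "Person", "Conference"]
print(align_schemas(src, tgt))
```

Evaluation against OAEI reference alignments then scores such a mapping by precision and recall over the gold-standard correspondences.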
Recent Publications:
- J. Chakraborty, B. Yaman, L. Virgili, K. Konar, S. K. Bansal. "OntoConnect: Unsupervised Ontology Alignment with Recursive Neural Network", in proceedings of the ACM/SIGAPP Symposium on Applied Computing (SAC) - Semantic Web and Applications Track, 2021.
- J. Chakraborty, B. Yaman, L. Virgili, K. Konar, S. K. Bansal. "OntoConnect: Unsupervised Ontology Alignment with Recursive Neural Network", in proceedings of the 15th International Workshop on Ontology Matching colocated with the 19th International Semantic Web Conference (ISWC), 2020.
Semantic ETL framework for Big Data Integration:
Big Data researchers must deal with the Variety of data, which spans formats such as structured and numeric data, unstructured text, email, video, and audio. Our proposed Semantic Extract-Transform-Load (ETL) framework uses semantic technologies to integrate and publish data from multiple sources as linked open data, providing an extensible solution for effective data integration and facilitating the creation of smart urban apps for smarter living.
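The transform step of such a pipeline can be sketched as mapping heterogeneous source fields onto a shared vocabulary before emitting RDF-style triples. The field names and vocabulary terms below are hypothetical, chosen only to show how two differently-shaped sources converge on one schema:

```python
def to_triples(record, subject_key, vocab):
    """Transform step of a semantic ETL pipeline: map a flat record's
    fields onto a shared vocabulary, emitting RDF-style triples."""
    subj = f"ex:{record[subject_key]}"
    return [(subj, vocab[k], v) for k, v in record.items()
            if k != subject_key and k in vocab]

# Two sources describe the same kind of entity with different field names;
# each gets its own mapping onto the shared vocabulary.
vocab_a = {"course_name": "ex:title", "provider": "ex:offeredBy"}
vocab_b = {"title": "ex:title", "org": "ex:offeredBy"}

rec_a = {"id": "c1", "course_name": "Databases", "provider": "Coursera"}
rec_b = {"id": "c2", "title": "Semantic Web", "org": "edX"}

triples = to_triples(rec_a, "id", vocab_a) + to_triples(rec_b, "id", vocab_b)
print(triples)
```

Once both sources speak the same vocabulary, the load step can publish the merged triples and a single SPARQL query can span them.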
Recent Publications:
- C. Dhekne*, S. K. Bansal. "MOOClink: An Aggregator for MOOC Offerings from Various Providers". In Journal of Engineering Education Transformations (JEET), [S.I.], Jan. 2018. ISSN 2394-1707.
- J. Chakraborty*, G. Thopugunta*, S. K. Bansal. "Data Extraction and Integration for Scholar Recommendation System", in Proceedings of Workshop on Semantic Data Integration at IEEE International Conference on Semantic Computing, pp. 397-402, Laguna Hills, CA, 2018.
- V. Johri*, S. K. Bansal. "Identifying trends in technologies and programming languages using Topic Modeling", in Proceedings of Semantic Data Integration Workshop at IEEE International Conference on Semantic Computing, pp. 391-396, Laguna Hills, CA, 2018.
- Y. Pandey*, S. K. Bansal. "A Semantic Safety Check System for Emergency Management". In Open Journal of Semantic Web (OJSW), 4(1), 35-50, 2017.
- J. Chakraborty*, A. Padki*, S. K. Bansal. "Semantic ETL - State-of-the-art and open research challenges", in Proceedings of Workshop on Semantic Data Integration (SDI) at IEEE International Conference on Semantic Computing (ICSC), January 2017, San Diego, CA USA.
- S. K. Bansal, S. Kagemann*. "Integrating Big Data: A Semantic Extract-Transform-Load Framework". In IEEE Computer, vol. 48, no. 3, pp. 42-50, Mar. 2015.
- S. Bansal. "Towards a Semantic Extract-Transform-Load (ETL) framework for Big Data Integration". In Proceedings of IEEE International Congress on Big Data (BIGDATA), pp. 522-529, June 2014, Anchorage, USA.
Question Answering over Linked Data
Most question answering (QA) systems over Linked Data, i.e., Knowledge Graphs, approach the question answering task as a conversion from a natural language question to its corresponding SPARQL query. A common approach is to use query templates to generate SPARQL queries with slots that need to be filled. Using templates instead of running an extensive NLP pipeline or end-to-end model turns the QA problem into a classification task, where the system must match the input question to the appropriate template. We investigate approaches to automatically learn and classify natural language questions into corresponding templates using recursive neural networks. We evaluate our system using benchmark datasets such as LC-QuAD and the QALD challenge datasets.
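A minimal sketch of the template-filling idea follows, with keyword matching standing in for the learned recursive-neural-network classifier; the templates, trigger phrases, and entity URIs are invented for illustration:

```python
# Hypothetical trigger-phrase -> SPARQL-template table; a real system
# learns the question-to-template mapping with a trained classifier.
TEMPLATES = {
    "who wrote": "SELECT ?a WHERE { <{ent}> dbo:author ?a }",
    "where is": "SELECT ?p WHERE { <{ent}> dbo:location ?p }",
}

def to_sparql(question, entity_uri):
    """Pick the template whose trigger phrase appears in the question
    and fill its entity slot. str.replace is used (not str.format)
    so the braces in the SPARQL body are left untouched."""
    q = question.lower()
    for trigger, template in TEMPLATES.items():
        if trigger in q:
            return template.replace("{ent}", entity_uri)
    return None  # no template matched

print(to_sparql("Who wrote Dracula?", "dbr:Dracula"))
```

Classification replaces the keyword lookup in practice, but the slot-filling step that produces the final executable query is essentially this substitution.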
Recent Publications:
- R. G. Athreya*, S. K. Bansal, A.-C. N. Ngomo, and R. Usbeck. "Template-based Question Answering using Recursive Neural Networks", arXiv 2020, arXiv:2004.13843.
- S. Tiwari*, B. Goel*, S. K. Bansal. "Mold: A Framework for Entity Extraction and Summarization", in proceedings of Semantic Data Integration (SDI) Workshop at 14th IEEE Intl. Conference on Semantic Computing (ICSC), San Diego, CA, 2020.
CircuitTutor (Co-PI)
Project Website: http://www.circuittutor.com
Description: Circuit Tutor is a step-based computer-aided tutoring system to aid in the teaching of elementary linear circuit analysis. This project involves a unique approach to computer-aided instruction in which the computer both creates and solves its own circuit problems on the fly using a sophisticated three-step algorithm. Each student can access an unlimited supply of fully worked and explained examples at gradually increasing levels of difficulty, as well as exercises designed to be completely isomorphic to those examples but completely different in both circuit topology and element values. No two students work the same problems, eliminating situations where one student merely copies the work of another or of a solution manual, since none exists. Both problems and solutions are error-free, eliminating the frequent frustration students experience when working conventional textbook problems and examples, where errors are notoriously common. Moreover, exercises are presented in a game-like format to significantly improve student motivation and engagement; students even frequently report that they are 'fun'! A variety of special pedagogical features, such as color-coding of nodes, is used to enhance student learning.
External Funding: Improving Undergraduate STEM Education: Education and Human Resources (IUSE: EHR) Program of the National Science Foundation under Grant No. 1821628 August 2018 - July 2022.
Recent Publications:
- B.J. Skromme, C. Redshaw, M.A. Gupta, M. S. Gupta, P. Andrei, H. Erives, A. Bailey, W. Thompson, S. Bansal. "Interactive Editing of Circuits in a Step-Based Tutoring System", In American Society for Engineering Education Annual Conference, July 2020.
- B. Skromme, S. K. Bansal, W. Barnard, M. White. "Step-based Tutoring Software for Complex procedures in Circuit Analysis", in proceedings of IEEE Frontiers in Education (FIE), pp. 1-5, Cincinnati, USA, 2019.
Instructional Module Development System (IMODS)
Project Website: http://imod.poly.asu.edu
Description: Design and development of a framework for outcome-based course design process and its translation into a Semantic Web-based software tool that can guide STEM educators through the complex task of curriculum development and provide relevant information about research-based pedagogical and assessment principles.
Area of Study: Semantic Computing, Engineering Education, Ontology Engineering, User-Centered Design
External Funding: National Science Foundation's Transforming Undergraduate Education in Science, Technology, Engineering and Mathematics (TUES) program Award No. DUE-1246139; July 2013 - June 2016.
Recent Publications:
- S. K. Bansal, J. M.-Alexander. "Building a Repository of Instructional and Assessment Techniques for Instructional Module Development System", in Proceedings of Research in Engineering Education Symposium (REES), pp. 771-780, Cape Town, South Africa, 2019.
- V. Raj*, P. Goulet*, S. K. Bansal. "Evaluation of Instructional Module Development System", in proceedings of IEEE Frontiers of Education Conference (FIE), pp. 1-9, San Jose CA, 2018.
- S. K. Bansal, O. Dalrymple. "Repository of Instructional and Assessment Techniques for OBE-based Instructional Module Development System". In Journal of Engineering Education Transformations (JEET) Volume 29, Issue 3, 2016, pp. 93-100.
- S. K. Bansal, A. Bansal, O. Dalrymple. "Outcome-based Education Model for Computer Science Education". In Journal of Engineering Education Transformations (JEET) Volume 28, Issue 2&3, 2015, pp. 113-121.
- O. Dalrymple, S. Bansal, A. Gaffar, R. Taylor*. "Instructional Module Development (IMOD) System: A User Study on Curriculum Design Process". In Proceedings of IEEE Frontiers in Education Conference (FIE), October 2014, Madrid, Spain.
- O. Dalrymple, S. Bansal, A. Gaffar. "User Research for the Instructional Module Development (IMOD) System". In Proceedings of American Society for Engineering Education Conference (ASEE) - NSF Grantees session, June 2014, Indianapolis, USA.
- O. Dalrymple, S. Bansal, K. Elamparithi*, Husna Gafoor*, Adam Lay*, Saishruti Shetty*. "Instructional Module Development (IMOD) System: Building Faculty Expertise in Outcome-based Course design". In Proceedings of IEEE Frontiers in Education Conference (FIE), October 2013, Oklahoma City, USA.
Collaborative Research Experience For Undergraduates (CREU)
External Funding: CREU project is funded by CRA-W and the Coalition to Diversify Computing (CDC).
Projects:
- Representing Linked Data to Discover Knowledge Patterns for Neighborhood Sustainability Rating
- Project Website: http://www.public.asu.edu/~skbansa2/creu2017/
- CREU Scholars: Cecilia LaPlace, Julia Schmidt, Vatricia Edgar
- Description: This project focuses on the extraction, integration, and querying of open data about environmental sustainability. The global trend toward urbanization has created a need for residents of urban neighborhoods to better understand the factors impacting the social, environmental, and economic sustainability of an area. To date, there is no concise representation of all aspects of sustainability; this work aims to fill that gap. A model of sustainability resting on economic, societal, and environmental development as the three main indicators was chosen to inform an ontology called SustainOnt, which is used to organize and analyze relevant data from various sources.
- Exploring the Use of Semantic Technologies for Big Data Integration
- Project Website: http://www.public.asu.edu/~skbansa2/creu2014
- CREU Scholars: Kristel Licata, Rebecca Little, Ashley Mannon
- Description: The CREU project is specifically designed to explore the use of semantic technologies to connect, link, and load data into a data warehouse. The specific objectives include: (i) creation of a semantic data model via ontologies to provide a basis for integration and understanding knowledge from multiple sources; (ii) creation of integrated semantic data using Resource Description Framework (RDF); (iii) extracting useful knowledge and information from the combined web of data using SPARQL. The experiments will be conducted using a few sample public datasets that provide city parks and recreation area information, street maps, and geographic data.
Recent Publications:
- V. Edgar*, C. LaPlace*, J. Schmidt*, A. Bansal, and S. K. Bansal. "SustainOnt: an ontology for defining an index of neighborhood sustainability across domains", in Proceedings of The International Workshop on Semantic Big Data (SBD 20) @ SIGMOD 2020. ACM, New York, NY, USA, Article 9, 16.
Distributed Research Experience For Undergraduates (DREU) - MOOCLink: Building and utilizing linked data from Massive open online courses
- Project Website: https://parasol.tamu.edu/dreu2014/Kagemann/
- DREU Scholar: Sebastian Kagemann
- Description: Linked Data is an emerging trend on the web with top companies such as Google, Yahoo and Microsoft promoting their own means of marking up data semantically. Despite the increasing prevalence of Linked Data, there are a limited number of applications that implement and take advantage of its capabilities, particularly in the domain of education. We present a project, MOOCLink, which aggregates online courses as Linked Data and utilizes that data in a web application to discover and compare open courseware.
Publications:
- S. K. Bansal, S. Kagemann*. "Integrating Big Data: A Semantic Extract-Transform-Load Framework". In IEEE Computer, vol. 48, no. 3, pp. 42-50, Mar. 2015.
- S. Kagemann*, S. K. Bansal. "MOOCLink: Building and utilizing linked data from Massive Open Online Courses", in IEEE International Conference on Semantic Computing (ICSC), pp. 373-380, 7-9 Feb. 2015. (<30% acceptance rate)
Universal Service-Semantics Description Language (USDL):
In order to effectively reuse existing services, we need an infrastructure that allows users and applications to discover, deploy, compose, and synthesize services automatically. This automation can take place only if a formal description of the Web services is available. USDL is a language for formally describing the semantics of Web services. USDL is based on the Web Ontology Language (OWL) and employs WordNet as a common basis for understanding the meaning of services. USDL can be regarded as formal service documentation that will allow sophisticated conceptual modeling and searching of available Web services, automated service composition, and other forms of automated service integration. This work won the Best Paper Award at the IEEE European Conference on Web Services (ECOWS) 2005.
Publications:
- S Kona, A Bansal, L Simon, A Mallya, G Gupta, and T Hite. "USDL: A Service-Semantics Description Language for Automatic Service Discovery and Composition". In International Journal of Web Services Research (IJWSR), Volume 6, No. 1, January-March 2009, pp. 20-48.
- A. Bansal, S. Kona, L. Simon, A. Mallya, G. Gupta, T. Hite. "Universal Service-Semantics Description Language". In Proceedings of IEEE European Conference On Web Services (ECOWS), November 2005, Vaxjo, Sweden. (Received Best Paper Award)
Web service Discovery and Composition:
We need infrastructure to discover and compose Web services. This project involves the design and development of a generalized semantics-based technique for automatic service composition that combines the rigor of process-oriented composition with the descriptiveness of semantics. Our generalized approach introduces the use of a conditional directed acyclic graph (DAG) in which complex interactions, containing control flow, information flow, and pre/post conditions, are effectively represented. The composition solution obtained is represented semantically as OWL-S documents. Web service composition will gain wider acceptance only when users know that the solutions obtained are composed of trustworthy services. The Web Service Discovery and Composition Engine based on semantic descriptions of Web services was used at WS-Challenge 2006 in San Francisco, California and WS-Challenge 2007 in Tokyo, Japan.
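The input/output matching at the heart of composition can be sketched as greedy forward chaining over service signatures. This toy version omits the semantic pre/post conditions and the conditional-DAG construction described above, and the services named are hypothetical:

```python
def compose(services, have, goal):
    """Greedy forward chaining: repeatedly fire any service whose
    inputs are all available, until the goal output is produced.
    Returns the ordered list of services fired, or None if the goal
    is unreachable from the given inputs."""
    plan, have = [], set(have)
    progress = True
    while goal not in have and progress:
        progress = False
        for name, (ins, outs) in services.items():
            if name not in plan and ins <= have:  # all inputs satisfied
                plan.append(name)
                have |= outs                      # outputs become available
                progress = True
    return plan if goal in have else None

# Hypothetical service registry: name -> (required inputs, produced outputs)
services = {
    "Geocode": ({"address"}, {"coords"}),
    "Weather": ({"coords"}, {"forecast"}),
}
print(compose(services, {"address"}, "forecast"))
```

A full engine would additionally rank candidate services (e.g., by trust or non-functional attributes) and branch the plan into a conditional DAG rather than a single sequence.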
Publications:
- S. Bansal, A. Bansal, G. Gupta, M. B. Blake. "Generalized semantic Web service composition", In Springer Journal of Service Oriented Computing and Applications (SOCA), pp. 1-23, November 2014.
- M. B. Blake, D. J. Cummings*, A. Bansal, and S. Kona. "Workflow Composition of Service Level Agreements and Web Services". In Journal of Decision Support Systems (DSS), Volume 53, Issue 1, April 2012, pp. 234-244.
- S. Bansal, A. Bansal. "Reputation-based Web service selection for Composition". In Proceedings of the World Congress on Services (SERVICES), July, 2011, Washington, DC.
- S. Bansal, A. Bansal. "Effective Web Service Selection for Composition using Centrality Measures". In Proceedings of International Conference on Information Society (i-Society), July 2011, London UK.
- S. Bansal, A. Bansal, M. B. Blake. "Trust based Dynamic Web Service Composition using Social Network Analysis". In Proceedings of Business Applications of Social Network Analysis Workshop (BASNA), December 2010 at IEEE 4th International Conference on Internet Multimedia Systems Architecture and Application (IMSAA), Bangalore, India (20% Acceptance Rate).
- S. Kona, A. Bansal, M. B. Blake, G. Gupta. "Weaving Functional and Non-Functional Attributes for Dynamic Web Service Composition". In Proceedings of 22nd Intl. Conference on Software Engineering and Knowledge Engineering (SEKE), July 2010, San Francisco, CA (33% Acceptance Rate).
- S. Kona, A. Bansal, G. Gupta, M. B. Blake. "Generalized Semantics-based Service Composition". In Proceedings of IEEE International Conference on Web Services (ICWS), September 2008, Beijing, China (16% Acceptance Rate). [PDF]
- S. Kona, A. Bansal, G. Gupta, M. B. Blake. "Towards a General Framework for Web Service Composition". In Proceedings of IEEE International Conference on Services Computing (SCC), July 2008. [PDF]
- A. Bansal, M. B. Blake, S. Kona, M. Jaeger, et al. "WSC-08: The Web Services Challenge". in Proceedings of IEEE Intl. Conference on E-Technology, E-Commerce and E-Services (CEC/EEE), July 2008, Crystal City, VA.
- A. Bansal, S. Kona, M. B. Blake, G. Gupta; "An Agent-based Approach for Composition of Semantic Web Services". In Proceedings of 17th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises (ACEC) at WETICE, June 2008, Rome, Italy. [PDF]
- S. Kona, A. Bansal, G. Gupta. "Automatic Composition of Semantic Web Services". In Proceedings of IEEE International Conference on Web Services (ICWS), July 2007, Salt Lake City, Utah (18% Acceptance Rate). [PDF]
- S. Kona, A. Bansal, G. Gupta, T. Hite. "Semantics-based Web Service Composition engine" (Short paper). In Proceedings of IEEE Conf. on E-Commerce Technology and Conf. on Enterprise Computing, E-Commerce and E-Service (CEC/EEE), July 2007, Tokyo, Japan. [PDF]
- S. Kona, A. Bansal, G. Gupta, T. Hite. "Efficient Web Service Discovery and Composition using Constraint Logic Programming". In Proceedings of Intl. Workshop on Applied Logic Programming Semantic Web and Services (ALPSWS) at FLoC, August 2006, Seattle,WA. [PDF]
- S. Kona, A. Bansal, G. Gupta, T. Hite. "Web Service Discovery and Composition using USDL" (Short paper). In Proceedings of IEEE Conf. on E-Commerce Technology and Conf. on Enterprise Computing, E-Commerce and E-Service (CEC/EEE), June, 2006, San Francisco, CA. [PDF]