Automated Legal Question Answering Competition (ALQAC)
Run in association with the International Conference on Knowledge and Systems Engineering

ALQAC-2021 CALL FOR TASK PARTICIPATION
ALQAC-2021 Workshop: November 10-12, 2021
ALQAC-2021 Registration due: July 15, 2021

Sponsored by
Japan Advanced Institute of Science and Technology (JAIST)
University of Engineering and Technology (VNU-UET)

Overview

As an associated event of KSE 2021, we are happy to announce the 1st Automated Legal Question Answering Competition (ALQAC 2021). ALQAC includes 3 tasks for each language: (1) Legal Document Retrieval, (2) Legal Textual Entailment, and (3) Legal Question Answering. For the competition, we introduce the Legal Question Answering dataset, a manually annotated dataset based on well-known statute laws in Vietnamese and Thai. Through the competition, we aim to develop a research community around legal support systems.

Dataset

The dataset file formats are illustrated by the following examples.

  • Legal Articles: Details about each article are in the following format:
[
    {
      "id": "45/2019/QH14",
      "articles": [
            {
                "text": "The content of legal article",
                "id": "1"
            }
        ]
    }
]
  • Annotation Samples: Details about each sample are in the following format:
[
    {
        "question_id": "q-1",
        "text": "The content of question or statement",
        "label": true,
        "relevant_articles": [
            {
                "law_id": "45/2019/QH14",
                "article_id": "1"
            }
        ]
    }
]
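For convenience, here is a minimal sketch of loading these two files in Python. The file names law.json and train.json are placeholders for illustration, not the official names:

import json

# Load the legal articles: a list of laws, each with a list of articles.
with open("law.json", encoding="utf-8") as f:
    laws = json.load(f)

# Load the annotated samples: questions/statements with labels and relevant articles.
with open("train.json", encoding="utf-8") as f:
    samples = json.load(f)

# Index article texts by (law_id, article_id) for fast lookup.
articles = {
    (law["id"], article["id"]): article["text"]
    for law in laws
    for article in law["articles"]
}

print(len(articles), "articles,", len(samples), "annotated samples")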

Tasks

Tasks Description

Task 1: Legal Document Retrieval

Task 1’s goal is to return the articles that are related to a given statement. An article is considered “relevant” to a statement if and only if the truth of the statement (Yes/No) can be entailed from the article. The task requires retrieving all the articles that are relevant to a query.

Specifically, the input samples consist of:

  • Legal Articles: whose format is the same as Legal Articles described in the Dataset section.
  • Question: whose format is in JSON as follows:
[
    {
        "question_id": "q-1",
        "text": "The content of question or statement"
    }
]

The system should retrieve all the relevant articles as follows:

[
    {
        "question_id": "q-1",
        "text": "The content of question or statement",
        "relevant_articles": [
            {
                "law_id": "45/2019/QH14",
                "article_id": "1"
            }
        ]
    }
]

Note that “relevant_articles” is the list of all articles relevant to the question/statement.
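For illustration only (not an official baseline), the following sketch is a simple word-overlap retriever that produces output in the required format. The tokenize function is a naive stand-in for a proper Vietnamese or Thai tokenizer:

import json

def tokenize(text):
    # Naive whitespace tokenization; real systems should use a proper
    # Vietnamese or Thai tokenizer.
    return set(text.lower().split())

def retrieve(question_text, articles, top_k=1):
    # articles: dict mapping (law_id, article_id) -> article text.
    query_tokens = tokenize(question_text)
    scored = []
    for (law_id, article_id), text in articles.items():
        article_tokens = tokenize(text)
        overlap = len(query_tokens & article_tokens)
        score = overlap / (len(query_tokens | article_tokens) or 1)  # Jaccard similarity
        scored.append((score, law_id, article_id))
    scored.sort(reverse=True)
    return [{"law_id": law_id, "article_id": article_id}
            for _, law_id, article_id in scored[:top_k]]

# Build the submission entry for one question.
articles = {("45/2019/QH14", "1"): "The content of legal article"}
result = {
    "question_id": "q-1",
    "text": "The content of question or statement",
    "relevant_articles": retrieve("The content of question or statement", articles),
}
print(json.dumps(result, ensure_ascii=False, indent=2))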

The evaluation measures are precision, recall, and F2-measure, defined per query as follows:

Precision_i = (the number of correctly retrieved articles for query i) / (the number of retrieved articles for query i)

Recall_i = (the number of correctly retrieved articles for query i) / (the number of relevant articles for query i)

F2_i = (5 × Precision_i × Recall_i) / (4 × Precision_i + Recall_i)

F2 = average of F2_i over all queries

In addition to the above evaluation measures, standard information retrieval measures such as Mean Average Precision and R-precision may be used to discuss the characteristics of the submission results.

In ALQAC 2021, the final evaluation score over all queries is computed as a macro-average (the evaluation measure is calculated for each query and then averaged) rather than a micro-average (the evaluation measure is calculated over the pooled results of all queries).
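A minimal sketch of this macro-averaged F2 computation follows; field names match the formats in this document, and the official evaluation code on Google Colab (linked in Submission Details) is authoritative:

def f2_score(retrieved, relevant):
    # retrieved, relevant: sets of (law_id, article_id) pairs for one query.
    correct = len(retrieved & relevant)
    if correct == 0:
        return 0.0
    precision = correct / len(retrieved)
    recall = correct / len(relevant)
    return (5 * precision * recall) / (4 * precision + recall)

def macro_f2(predictions, gold):
    # predictions, gold: dicts mapping question_id -> set of article pairs.
    # Macro-average: compute F2 per query, then average over queries.
    scores = [f2_score(predictions.get(qid, set()), relevant)
              for qid, relevant in gold.items()]
    return sum(scores) / len(scores)

gold = {"q-1": {("45/2019/QH14", "1")}}
pred = {"q-1": {("45/2019/QH14", "1"), ("45/2019/QH14", "2")}}
print(macro_f2(pred, gold))  # precision 0.5, recall 1.0 -> F2 ≈ 0.833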

Task 2: Legal Textual Entailment

Task 2’s goal is to construct Yes/No question answering systems for legal queries, by entailment from the relevant articles. Based on the content of legal articles, the system should answer whether the statement is true or false.

Specifically, each input sample consists of a question/statement paired with its relevant articles (one or more), as follows:

[
    {
        "question_id": "q-1",
        "text": "The content of question or statement",
        "relevant_articles": [
            {
                "law_id": "45/2019/QH14",
                "article_id": "1"
            }
        ]
    }
]

The system should answer whether the question/statement is true or false via the “label” field, in JSON format as follows:

[
    {
        "question_id": "q-1",
        "text": "The content of question or statement",
        "label": true,
        "relevant_articles": [
            {
                "law_id": "45/2019/QH14",
                "article_id": "1"
            }
        ]
    }
]

The evaluation measure will be accuracy, with respect to whether the Yes/No question was correctly confirmed:

Accuracy = (the number of queries correctly confirmed as true or false) / (the number of all queries)
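A minimal sketch of this accuracy computation (the same measure applies to Task 3):

def accuracy(predictions, gold):
    # predictions, gold: dicts mapping question_id -> boolean label.
    correct = sum(1 for qid, label in gold.items() if predictions.get(qid) == label)
    return correct / len(gold)

print(accuracy({"q-1": True, "q-2": False}, {"q-1": True, "q-2": True}))  # 0.5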

In addition to the above evaluation measures, standard information retrieval measures such as Mean Average Precision and R-precision may be used to discuss the characteristics of the submission results.

Task 3: Legal Question Answering

Task 3’s goal is to construct Yes/No question answering systems for legal queries.

Given a legal statement or legal question, the task is to answer “Yes” or “No”, in other words, to determine whether it is true or false. This task can be approached as a pipeline of Task 1 and Task 2, but not necessarily so; for example, systems may use knowledge sources other than the results of Task 1 (a minimal pipeline sketch follows the output example below).

Specifically, the input samples consist of the question/statement as follows:

[
    {
        "question_id": "q-1",
        "text": "The content of question or statement"
    }
]

The system should answer whether the question/statement is true or false via the “label” field, in JSON format as follows:

[
    {
        "question_id": "q-1",
        "text": "The content of question or statement",
        "label": true
    }
]
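As noted above, Task 3 can be approached as a pipeline of Task 1 and Task 2. A minimal sketch of that composition follows, where retrieve and entail are hypothetical stand-ins for a participant’s own Task 1 and Task 2 systems:

def retrieve(question_text):
    # Stand-in for a Task 1 system: return the relevant (law_id, article_id) pairs.
    return [("45/2019/QH14", "1")]

def entail(question_text, article_texts):
    # Stand-in for a Task 2 system: return True/False given the relevant articles.
    return True

def answer(question, articles):
    # articles: dict mapping (law_id, article_id) -> article text.
    relevant = retrieve(question["text"])
    texts = [articles[pair] for pair in relevant if pair in articles]
    return {"question_id": question["question_id"],
            "text": question["text"],
            "label": entail(question["text"], texts)}

articles = {("45/2019/QH14", "1"): "The content of legal article"}
print(answer({"question_id": "q-1", "text": "The content of question or statement"}, articles))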

The evaluation measure will be accuracy, with respect to whether the Yes/No question was correctly confirmed:

Accuracy = (the number of queries correctly confirmed as true or false) / (the number of all queries)

In addition to the above evaluation measures, standard information retrieval measures such as Mean Average Precision and R-precision may be used to discuss the characteristics of the submission results.

Submission Details

Participants are required to submit a paper describing their method and experimental results. The result files for each task must be submitted separately via e-mail. For each task, participants may submit a maximum of 3 result files, corresponding to 3 different settings/methods. The evaluation code is published on Google Colab (https://colab.research.google.com/drive/1aLK03wNldd4glo9Lh8SpyOH6AQAu0iKe#scrollTo=NpLncXEKpp60).

In this framework, we define the input/output data structures and evaluation methods for all 3 tasks.

Note: Participants are responsible for ensuring that their result files follow the required structure.

The following examples show the outputs that participants’ models need to generate for each of the 3 tasks:

Task 1: Legal Document Retrieval

[
    {
        "question_id": "q-193",
        "relevant_articles": [
            {
                "law_id": "100/2015/QH13",
                "article_id": "177"
            }
        ]
    },
    ...
]

Task 2: Legal Textual Entailment

[
    {
        "question_id": "q-193",
        "label": false
    },
    ...
]

Task 3: Legal Question Answering

[
    {
        "question_id": "q-193",
        "label": false
    },
    ...
]
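Because participants are responsible for the structure of their result files (see the note above), a minimal sketch of a structural check before submission is given below; the required key sets are read off the examples above:

import json

REQUIRED_KEYS = {
    1: {"question_id", "relevant_articles"},
    2: {"question_id", "label"},
    3: {"question_id", "label"},
}

def check_result_file(path, task):
    with open(path, encoding="utf-8") as f:
        results = json.load(f)
    assert isinstance(results, list), "top level must be a JSON list"
    for entry in results:
        missing = REQUIRED_KEYS[task] - entry.keys()
        assert not missing, f"{entry.get('question_id')}: missing keys {missing}"
        if task == 1:
            for article in entry["relevant_articles"]:
                assert {"law_id", "article_id"} <= article.keys(), "bad article entry"
        else:
            assert isinstance(entry["label"], bool), "label must be true/false"
    print(path, "passes the structural check")

# Example: check_result_file("task1_run1.json", task=1)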

At least one author of each accepted paper must present the paper at the ALQAC workshop of KSE 2021.

Papers authored by the task winners will be included in the main KSE 2021 proceedings if the ALQAC organizers confirm the papers’ novelty after the review process.

Papers should conform to the standards set out on the KSE 2021 webpage (section Submission) and be submitted via EasyChair.

Application Details

Potential participants in ALQAC-2021 should respond to this call for participation by submitting an application via tinyurl.com/ALQACRegistration.

Schedule

July 1, 2021 Call for participation
July 15, 2021 Registration deadline
July 15, 2021 Training data release (via contact email in registration)
August 20, 2021 Testing data release (via contact email in registration)
August 24, 2021 (11:59 p.m. GMT+7): Submission deadline of Task 1 and Task 3
September 1, 2021 (11:59 p.m. GMT+7): Submission deadline of Task 2
September 5, 2021 Announcements of rankings/assessments 
September 15, 2021 Paper Submission in KSE Workshop: ALQAC
September 18, 2021 Notification of Acceptance
September 25, 2021 Camera-ready Submission
November 10-12, 2021 KSE 2021

Task winners

Task 1: First prize: Aleph; Second prize: AimeLaw

Task 2: First prize (two teams): AimeLaw & Aleph; Second prize: Kodiak

Task 3: First prize: Aleph; Second prize (two teams): AimeLaw & Dat N


Questions and Further Information

Email: lttung(at)jaist.ac.jp with the subject [ALQAC-2021] <Content>

Program Committee

Nguyen Le Minh, Japan Advanced Institute of Science and Technology (JAIST), Japan

Teeradaj Racharak, Japan Advanced Institute of Science and Technology (JAIST), Japan

Tran Duc Vu, The Institute of Statistical Mathematics (ISM), Japan

Phan Viet Anh, Le Quy Don Technical University (LQDTU), Vietnam

Nguyen Truong Son, Ho Chi Minh University of Science (VNU-HCMUS), Vietnam

Nguyen Tien Huy, Ho Chi Minh University of Science (VNU-HCMUS), Vietnam

Peerapon Vateekul, Chulalongkorn University (CU), Thailand

Prachya Boonkwan, National Electronics and Computer Technology Center (NECTEC), Thailand

Nguyen Ha Thanh, Japan Advanced Institute of Science and Technology (JAIST), Japan

Le Thanh Tung, Japan Advanced Institute of Science and Technology (JAIST), Japan

Bui Minh Quan, Japan Advanced Institute of Science and Technology (JAIST), Japan

Dang Tran Binh, Japan Advanced Institute of Science and Technology (JAIST), Japan

Vuong Thi Hai Yen, University of Engineering and Technology (VNU-UET), Vietnam

Nguyen Minh Phuong, Japan Advanced Institute of Science and Technology (JAIST), Japan

Nguyen Minh Chau, Japan Advanced Institute of Science and Technology (JAIST), Japan