---
license: etalab-2.0
task_categories:
  - text-generation
  - text-classification
  - zero-shot-classification
  - sentence-similarity
  - feature-extraction
language:
  - fr
tags:
  - legal
  - justice
  - rulings
  - French
  - français
  - jurisprudence
pretty_name: Jurisprudence
configs:
  - config_name: default
    default: true
    data_files:
      - split: tribunal_judiciaire
        path: "tribunal_judiciaire.parquet"
      - split: cour_d_appel
        path: "cour_d_appel.parquet"
      - split: cour_de_cassation
        path: "cour_de_cassation.parquet"
  - config_name: tribunal_judiciaire
    data_files: "tribunal_judiciaire.parquet"
  - config_name: cour_d_appel
    data_files: "cour_d_appel.parquet"
  - config_name: cour_de_cassation
    data_files: "cour_de_cassation.parquet"
---
## Dataset Description
 - **Repository:** https://huggingface.co/datasets/antoinejeannot/jurisprudence
 - **Point of Contact:** [Antoine Jeannot](mailto:antoine.jeannot1002@gmail.com)

<p align="center"><img src="https://raw.githubusercontent.com/antoinejeannot/jurisprudence/artefacts/jurisprudence.svg" width=650></p>

[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-md-dark.svg)](https://huggingface.co/datasets/antoinejeannot/jurisprudence) [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/antoinejeannot/jurisprudence)

# ✨ Jurisprudence, release v2024.09.19 🏛️

Jurisprudence is an open-source project that automates the collection and distribution of French legal decisions. It leverages the Judilibre API provided by the Cour de Cassation to:

- Fetch rulings from major French courts (Cour de Cassation, Cour d'Appel, Tribunal Judiciaire)
- Process and convert the data into easily accessible formats
- Publish & version updated datasets on Hugging Face every few days

The project aims to democratize access to legal information, enabling researchers, legal professionals and the public to explore and analyze French court decisions with ease.
Whether you are conducting legal research, developing AI models, or simply curious about French jurisprudence, this project offers a valuable, open resource for exploring the French legal landscape.

## 📊 Exported Data

| Jurisdiction | Jurisprudences | Oldest | Latest | Tokens | JSONL (gzipped) | Parquet |
|--------------|----------------|--------|--------|--------|-----------------|---------|
| Cour d'Appel | 381,768 | 1996-03-25 | 2024-09-13 | 1,911,897,207 | [Download (1.67 GB)](https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/cour_d_appel.jsonl.gz?download=true) | [Download (2.80 GB)](https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/cour_d_appel.parquet?download=true) |
| Tribunal Judiciaire | 65,343 | 2023-12-14 | 2024-09-12 | 234,306,537 | [Download (209.21 MB)](https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/tribunal_judiciaire.jsonl.gz?download=true) | [Download (349.09 MB)](https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/tribunal_judiciaire.parquet?download=true) |
| Cour de Cassation | 534,787 | 1860-08-01 | 2024-09-12 | 1,104,517,382 | [Download (929.35 MB)](https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/cour_de_cassation.jsonl.gz?download=true) | [Download (1.57 GB)](https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/cour_de_cassation.parquet?download=true) |
| **Total** | **981,898** | **1860-08-01** | **2024-09-13** | **3,250,721,126** | **2.79 GB** | **4.71 GB** |

<i>Latest update date: 2024-09-19</i>

<i>Token counts are computed with GPT-4's tiktoken tokenizer (cl100k_base) over the `text` column.</i>
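
A token count like the ones in the table can be reproduced with a short snippet. This is a minimal sketch, assuming the `tiktoken` package and a ruling's `text` value:

```python
# pip install tiktoken
import tiktoken

# GPT-4 uses the cl100k_base encoding
encoding = tiktoken.encoding_for_model("gpt-4")

def count_tokens(text: str) -> int:
    """Return the number of GPT-4 tokens in a ruling's `text` field."""
    return len(encoding.encode(text))

print(count_tokens("La Cour de cassation rejette le pourvoi."))
```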

## 🤗 Hugging Face Dataset

The up-to-date jurisprudence dataset is available at https://huggingface.co/datasets/antoinejeannot/jurisprudence in gzipped JSONL and Parquet formats.

This lets you fetch, query, process and index every ruling in the blink of an eye!

### Usage Examples
#### HuggingFace Datasets
```python
# pip install datasets
from datasets import load_dataset

dataset = load_dataset("antoinejeannot/jurisprudence")
dataset.shape
# {'tribunal_judiciaire': (58986, 33),
#  'cour_d_appel': (378392, 33),
#  'cour_de_cassation': (534258, 33)}

# alternatively, you can load each jurisdiction separately
cour_d_appel = load_dataset("antoinejeannot/jurisprudence", "cour_d_appel")
tribunal_judiciaire = load_dataset("antoinejeannot/jurisprudence", "tribunal_judiciaire")
cour_de_cassation = load_dataset("antoinejeannot/jurisprudence", "cour_de_cassation")
```

Leveraging 🤗 Datasets also makes it easy to feed the data into [PyTorch](https://huggingface.co/docs/datasets/use_with_pytorch), [TensorFlow](https://huggingface.co/docs/datasets/use_with_tensorflow), [JAX](https://huggingface.co/docs/datasets/use_with_jax), etc., as sketched below.
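
For instance, here is a minimal sketch of streaming rulings through a PyTorch `DataLoader`; it assumes the named `cour_de_cassation` configuration exposes a single `train` split and keeps only the `text` column:

```python
# pip install datasets torch
from datasets import load_dataset
from torch.utils.data import DataLoader

# Load one jurisdiction; single-file configs expose a single "train" split
cassation = load_dataset("antoinejeannot/jurisprudence", "cour_de_cassation", split="train")

# Keep only the ruling text and iterate in batches
loader = DataLoader(cassation.select_columns(["text"]), batch_size=8)
for batch in loader:
    print(len(batch["text"]))  # 8 ruling texts per batch
    break
```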

#### BYOL: Bring Your Own Lib
For analysis, Polars, pandas or DuckDB are also commonly used and work just as well:
```python
url = "https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/cour_de_cassation.parquet"  # or tribunal_judiciaire.parquet, cour_d_appel.parquet

# pip install polars
import polars as pl
df = pl.scan_parquet(url)  # lazy scan, call .collect() to materialize

# pip install pandas
import pandas as pd
df = pd.read_parquet(url)

# pip install duckdb
import duckdb
table = duckdb.read_parquet(url)  # remote reads rely on the httpfs extension
```
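
#### Raw JSONL
The gzipped JSONL exports can also be streamed with the standard library alone. This is a minimal sketch, assuming each line is a single JSON document carrying the `text` column:

```python
import gzip
import json
import urllib.request

url = "https://huggingface.co/datasets/antoinejeannot/jurisprudence/resolve/main/tribunal_judiciaire.jsonl.gz"

# Decompress the HTTP response on the fly and parse one decision per line
with urllib.request.urlopen(url) as response:
    with gzip.open(response, mode="rt", encoding="utf-8") as lines:
        for line in lines:
            ruling = json.loads(line)
            print(ruling["text"][:100])  # first characters of the first decision
            break
```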

## 🪪 Citing & Authors

If you use this code in your research, please use the following BibTeX entry:
```bibtex
@misc{antoinejeannot2024,
  author = {Jeannot Antoine and {Cour de Cassation}},
  title = {Jurisprudence},
  year = {2024},
  howpublished = {\url{https://github.com/antoinejeannot/jurisprudence}},
  note = {Data source: API Judilibre, \url{https://www.data.gouv.fr/en/datasets/api-judilibre/}}
}
```

This project relies on the [Judilibre API provided by the Cour de Cassation](https://www.data.gouv.fr/en/datasets/api-judilibre/), which is made available under the Open License 2.0 (Licence Ouverte 2.0).

The pipeline queries the API every 3 days at midnight UTC and exports the data to Hugging Face in the formats above, with no transformation other than format conversion.

<p align="center"><a href="https://www.etalab.gouv.fr/licence-ouverte-open-licence/"><img src="https://raw.githubusercontent.com/antoinejeannot/jurisprudence/artefacts/license.png" width=50 alt="license ouverte / open license"></a></p>