recommender_system / recomsys / Commits / 47eaca7b

Commit 47eaca7b ("commit main"), authored 1 year ago by Adrien Payen
Parent: be00dfce
No related branches, tags, or merge requests found.

Showing 3 changed files with 615 additions and 84 deletions:

evaluator.ipynb    +24   −24
loaders.py         +3    −3
user_based.ipynb   +588  −57
evaluator.ipynb  +24 −24  (view file @ 47eaca7b)

@@ -13,7 +13,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 58,
+    "execution_count": 16,
     "id": "6aaf9140",
     "metadata": {},
     "outputs": [

@@ -59,7 +59,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 59,
+    "execution_count": 17,
     "id": "d6d82188",
     "metadata": {},
     "outputs": [],

@@ -193,7 +193,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 60,
+    "execution_count": 18,
     "id": "f1849e55",
     "metadata": {},
     "outputs": [],

@@ -246,7 +246,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 61,
+    "execution_count": 20,
     "id": "704f4d2a",
     "metadata": {},
     "outputs": [

@@ -311,31 +311,31 @@
 (HTML rendering of the results table in the cell output; the values change exactly as in the
  text/plain rendering shown in the next hunk.)

@@ -343,13 +343,13 @@
      ],
      "text/plain": [
       "            mae      rmse  hit_rate     novelty\n",
-      "baseline_1  1.544940  1.776982  0.112150   99.405607\n",
+      "baseline_1  1.567221  1.788369  0.074766   99.405607\n",
-      "baseline_2  1.491063  1.844761  0.009346  429.942991\n",
+      "baseline_2  1.502872  1.840696  0.056075  429.942991\n",
-      "baseline_3  0.868139  1.066303  0.074766   99.405607\n",
+      "baseline_3  0.873993  1.076982  0.065421   99.405607\n",
-      "baseline_4  0.727803  0.927636  0.158879   57.328037"
+      "baseline_4  0.730657  0.938814  0.186916   57.465421"
      ]
     },
-    "execution_count": 61,
+    "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }

@@ -372,7 +372,7 @@
     "}\n",
     "\n",
     "sp_ratings = load_ratings(surprise_format=True)\n",
-    "precomputed_dict = precomputed_information(pd.read_csv(\"../data/tiny/evidence/ratings.csv\"))\n",
+    "precomputed_dict = precomputed_information(pd.read_csv(\"data/tiny/evidence/ratings.csv\"))\n",
     "evaluation_report = create_evaluation_report(EvalConfig, sp_ratings, precomputed_dict, AVAILABLE_METRICS)\n",
     "export_evaluation_report(evaluation_report)"
    ]

@@ -394,7 +394,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.8"
+   "version": "3.12.2"
   }
  },
  "nbformat": 4,
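The only functional change in this notebook, besides refreshed execution counts and a bump of the kernel to Python 3.12.2, is the ratings path: `../data/tiny/evidence/ratings.csv` becomes `data/tiny/evidence/ratings.csv`, so the notebook now assumes it is launched from the repository root. A minimal sketch of resolving the file regardless of the working directory; `PROJECT_ROOT` and the layout are assumptions, not part of the commit:

``` python
# Sketch (not part of the commit): resolve the ratings file relative to an
# assumed project root so the notebook works from any working directory.
from pathlib import Path

import pandas as pd

PROJECT_ROOT = Path.cwd()  # assumption: the notebook is started from the repo root
RATINGS_CSV = PROJECT_ROOT / "data" / "tiny" / "evidence" / "ratings.csv"

ratings_df = pd.read_csv(RATINGS_CSV)
```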
%% Cell type:markdown id:a665885b tags:

# Evaluator Module

The Evaluator module creates evaluation reports.

Reports contain evaluation metrics for the models specified in the evaluation config.
%% Cell type:code id:6aaf9140 tags:

``` python
# reloads modules automatically before entering the execution of code
%load_ext autoreload
%autoreload 2

# third-party imports
import numpy as np
import pandas as pd
# -- add new imports here --

# local imports
from configs import EvalConfig
from constants import Constant as C
from loaders import export_evaluation_report
from loaders import load_ratings
# -- add new imports here --
from surprise.model_selection import train_test_split
from surprise import accuracy
from surprise.model_selection import LeaveOneOut
from collections import Counter
```

%% Output

The autoreload extension is already loaded. To reload it, use:
  %reload_ext autoreload
%% Cell type:markdown id:d47c24a4 tags:

# 1. Model validation functions

Validation functions perform cross-validation on recommender system models.
%% Cell type:code id:d6d82188 tags:

``` python
def generate_split_predictions(algo, ratings_dataset, eval_config):
    """Generate predictions on a random test set specified in eval_config"""
    # -- implement the function generate_split_predictions --
    # Splitting the data into train and test sets
    trainset, testset = train_test_split(ratings_dataset, test_size=eval_config.test_size)
    # Training the algorithm on the train data set
    algo.fit(trainset)
    # Predict ratings for the testset
    predictions = algo.test(testset)
    return predictions


def generate_loo_top_n(algo, ratings_dataset, eval_config):
    """Generate top-n recommendations for each user on a random Leave-one-out split (LOO)"""
    # -- implement the function generate_loo_top_n --
    # Create a LeaveOneOut split
    loo = LeaveOneOut(n_splits=1)
    for trainset, testset in loo.split(ratings_dataset):
        algo.fit(trainset)  # Train the algorithm on the training set
        anti_testset = trainset.build_anti_testset()  # Build the anti test-set
        predictions = algo.test(anti_testset)  # Get predictions on the anti test-set
        top_n = {}
        for uid, iid, _, est, _ in predictions:
            if uid not in top_n:
                top_n[uid] = []
            top_n[uid].append((iid, est))
        for uid, user_ratings in top_n.items():
            user_ratings.sort(key=lambda x: x[1], reverse=True)
            top_n[uid] = user_ratings[:eval_config.top_n_value]  # Get top-N recommendations
        anti_testset_top_n = top_n
        return anti_testset_top_n, testset


def generate_full_top_n(algo, ratings_dataset, eval_config):
    """Generate top-n recommendations for each user with the full training set"""
    full_trainset = ratings_dataset.build_full_trainset()  # Build the full training set
    algo.fit(full_trainset)  # Train the algorithm on the full training set
    anti_testset = full_trainset.build_anti_testset()  # Build the anti test-set
    predictions = algo.test(anti_testset)  # Get predictions on the anti test-set
    top_n = {}
    for uid, iid, _, est, _ in predictions:
        if uid not in top_n:
            top_n[uid] = []
        top_n[uid].append((iid, est))
    for uid, user_ratings in top_n.items():
        user_ratings.sort(key=lambda x: x[1], reverse=True)
        top_n[uid] = user_ratings[:eval_config.top_n_value]  # Get top-N recommendations
    anti_testset_top_n = top_n
    return anti_testset_top_n


def precomputed_information(movie_data):
    """Returns a dictionary that precomputes relevant information for evaluating in full mode

    Dictionary keys:
    - precomputed_dict["item_to_rank"] : a dictionary mapping movie ids to popularity rankings
    - (-- for your project, add other relevant information here --)
    """
    # Initialize an empty dictionary to store item_id to rank mapping
    item_to_rank = {}
    # Calculate popularity rank for each movie
    ratings_count = movie_data.groupby('movieId').size().sort_values(ascending=False)
    # Assign ranks to movies based on their popularity
    for rank, (movie_id, _) in enumerate(ratings_count.items(), start=1):
        item_to_rank[movie_id] = rank
    # Create the precomputed dictionary
    precomputed_dict = {}
    precomputed_dict["item_to_rank"] = item_to_rank
    return precomputed_dict


def create_evaluation_report(eval_config, sp_ratings, precomputed_dict, available_metrics):
    """Create a DataFrame evaluating various models on metrics specified in an evaluation config."""
    evaluation_dict = {}
    for model_name, model, arguments in eval_config.models:
        print(f'Handling model {model_name}')
        algo = model(**arguments)
        evaluation_dict[model_name] = {}

        # Type 1 : split evaluations
        if len(eval_config.split_metrics) > 0:
            print('Training split predictions')
            predictions = generate_split_predictions(algo, sp_ratings, eval_config)
            for metric in eval_config.split_metrics:
                print(f'- computing metric {metric}')
                assert metric in available_metrics['split']
                evaluation_function, parameters = available_metrics["split"][metric]
                evaluation_dict[model_name][metric] = evaluation_function(predictions, **parameters)

        # Type 2 : loo evaluations
        if len(eval_config.loo_metrics) > 0:
            print('Training loo predictions')
            anti_testset_top_n, testset = generate_loo_top_n(algo, sp_ratings, eval_config)
            for metric in eval_config.loo_metrics:
                assert metric in available_metrics['loo']
                evaluation_function, parameters = available_metrics["loo"][metric]
                evaluation_dict[model_name][metric] = evaluation_function(anti_testset_top_n, testset, **parameters)

        # Type 3 : full evaluations
        if len(eval_config.full_metrics) > 0:
            print('Training full predictions')
            anti_testset_top_n = generate_full_top_n(algo, sp_ratings, eval_config)
            for metric in eval_config.full_metrics:
                assert metric in available_metrics['full']
                evaluation_function, parameters = available_metrics["full"][metric]
                evaluation_dict[model_name][metric] = evaluation_function(
                    anti_testset_top_n,
                    **precomputed_dict,
                    **parameters
                )

    return pd.DataFrame.from_dict(evaluation_dict).T
```
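The functions above rely on several attributes of `EvalConfig` (imported from `configs`, which is not part of this diff): `models`, `split_metrics`, `loo_metrics`, `full_metrics`, `test_size` and `top_n_value`. A minimal sketch of what such a config could look like; the model names, algorithms and values below are illustrative assumptions, not the project's actual configuration:

``` python
# Hypothetical EvalConfig sketch -- configs.py is not shown in this commit.
from surprise import NormalPredictor, BaselineOnly, KNNBasic, SVD

class EvalConfig:
    # (model_name, model_class, constructor_kwargs) triples consumed by
    # create_evaluation_report
    models = [
        ("baseline_1", NormalPredictor, {}),
        ("baseline_2", BaselineOnly, {}),
        ("baseline_3", KNNBasic, {}),
        ("baseline_4", SVD, {}),
    ]
    split_metrics = ["mae", "rmse"]  # rating-prediction metrics (random split)
    loo_metrics = ["hit_rate"]       # top-n metrics on a leave-one-out split
    full_metrics = ["novelty"]       # top-n metrics on the full trainset
    test_size = 0.25                 # fraction passed to train_test_split
    top_n_value = 10                 # length of each user's recommendation list
```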
%% Cell type:markdown id:f7e83d1d tags:

# 2. Evaluation metrics

Implement evaluation metrics either for rating predictions (split metrics) or for top-n recommendations (loo metrics, full metrics).
%% Cell type:code id:f1849e55 tags:

``` python
def get_hit_rate(anti_testset_top_n, testset):
    """Compute the average hit over the users (loo metric)

    A hit (1) happens when the movie in the testset has been picked by the top-n recommender
    A fail (0) happens when the movie in the testset has not been picked by the top-n recommender
    """
    # -- implement the function get_hit_rate --
    hits = 0
    total_users = len(testset)
    for uid, true_iid, _ in testset:
        if uid in anti_testset_top_n and true_iid in {iid for iid, _ in anti_testset_top_n[uid]}:
            hits += 1
    hit_rate = hits / total_users
    return hit_rate


def get_novelty(anti_testset_top_n, item_to_rank):
    """Compute the average novelty of the top-n recommendation over the users (full metric)

    The novelty is defined as the average ranking of the movies recommended
    """
    # -- implement the function get_novelty --
    total_rank_sum = 0
    total_recommendations = 0
    for uid, recommendations in anti_testset_top_n.items():
        for iid, _ in recommendations:
            if iid in item_to_rank:
                total_rank_sum += item_to_rank[iid]
            total_recommendations += 1
    if total_recommendations == 0:
        return 0  # Avoid division by zero
    average_rank_sum = total_rank_sum / total_recommendations
    return average_rank_sum
```
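As a quick sanity check of the two metrics, here is a hand-built example (not from the commit); the user ids, movie ids and ranks are made up:

``` python
# Toy data: "u1" gets a hit (movie 10 is in their top-n list), "u2" does not.
toy_top_n = {
    "u1": [(10, 4.8), (20, 4.5)],
    "u2": [(30, 4.9), (40, 4.1)],
}
toy_testset = [("u1", 10, 5.0), ("u2", 99, 4.0)]
print(get_hit_rate(toy_top_n, toy_testset))      # 0.5 (one hit out of two users)

# Novelty is the mean popularity rank of the recommended items:
# ranks 1, 5, 2, 7 -> (1 + 5 + 2 + 7) / 4 = 3.75
toy_item_to_rank = {10: 1, 20: 5, 30: 2, 40: 7}
print(get_novelty(toy_top_n, toy_item_to_rank))  # 3.75
```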
%% Cell type:markdown id:1a9855b3 tags:

# 3. Evaluation workflow

Load the data, evaluate the models, and save the experimental outcomes.
%% Cell type:code id:704f4d2a tags:

``` python
AVAILABLE_METRICS = {
    "split": {
        "mae": (accuracy.mae, {'verbose': False}),
        "rmse": (accuracy.rmse, {'verbose': False})
        # Add new split metrics here if needed
    },
    "loo": {
        "hit_rate": (get_hit_rate, {}),
        # Add new loo metrics here if needed
    },
    "full": {
        "novelty": (get_novelty, {}),
        # Add new full metrics here if needed
    }
}

sp_ratings = load_ratings(surprise_format=True)
precomputed_dict = precomputed_information(pd.read_csv("data/tiny/evidence/ratings.csv"))
evaluation_report = create_evaluation_report(EvalConfig, sp_ratings, precomputed_dict, AVAILABLE_METRICS)
export_evaluation_report(evaluation_report)
```
%% Output

Handling model baseline_1
Training split predictions
- computing metric mae
- computing metric rmse
Training loo predictions
Training full predictions
Handling model baseline_2
Training split predictions
- computing metric mae
- computing metric rmse
Training loo predictions
Training full predictions
Handling model baseline_3
Training split predictions
- computing metric mae
- computing metric rmse
Training loo predictions
Training full predictions
Handling model baseline_4
Training split predictions
- computing metric mae
- computing metric rmse
Training loo predictions
Training full predictions
The data has been exported to the evaluation report
                 mae      rmse  hit_rate     novelty
baseline_1  1.567221  1.788369  0.074766   99.405607
baseline_2  1.502872  1.840696  0.056075  429.942991
baseline_3  0.873993  1.076982  0.065421   99.405607
baseline_4  0.730657  0.938814  0.186916   57.465421

... (remainder of this file's diff is collapsed)
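The `# Add new ... metrics here if needed` markers in the workflow cell above are the extension points. As an illustration (not part of the commit), a further split metric such as Surprise's MSE would be registered like this, assuming it is also listed in `EvalConfig.split_metrics`:

``` python
# Sketch: register an extra rating-prediction metric. accuracy.mse comes from
# Surprise's accuracy module; any callable taking the predictions plus keyword
# parameters fits the (function, parameters) convention used above.
AVAILABLE_METRICS["split"]["mse"] = (accuracy.mse, {'verbose': False})
```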
loaders.py  +3 −3  (view file @ 47eaca7b)

# Third-party imports
import pandas as pd
import os
from pprint import pprint as pp
# import display

# Local imports
from constants import Constant as C

@@ -24,7 +23,8 @@ def load_ratings(surprise_format=False):
        return surprise_data
    else:
        return df_ratings

print(load_ratings())

def load_items():
    """Loads items data.

... (remainder of this file's diff is collapsed)
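Only the import block and a fragment of `load_ratings` are visible above; the rest of loaders.py is collapsed in this diff. For context, a plausible shape for `load_ratings`, inferred from the notebook's `load_ratings(surprise_format=True)` call and Surprise's `Dataset`/`Reader` API; the column names, rating scale, and CSV path are assumptions:

``` python
# Hypothetical sketch of load_ratings -- the real implementation is collapsed
# in this diff.
import pandas as pd
from surprise import Dataset, Reader

def load_ratings(surprise_format=False):
    # Path taken from the notebook's workflow cell; adjust to your layout.
    df_ratings = pd.read_csv("data/tiny/evidence/ratings.csv")
    if surprise_format:
        reader = Reader(rating_scale=(0.5, 5.0))  # assumed MovieLens-style scale
        surprise_data = Dataset.load_from_df(
            df_ratings[["userId", "movieId", "rating"]], reader
        )
        return surprise_data
    else:
        return df_ratings
```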
user_based.ipynb  +588 −57  (view file @ 47eaca7b)

... (this diff is collapsed)