An empirical study of natural noise management in group recommendation systems

This paper proposes four approaches to reduce natural noise in group recommendation. The core idea: if a user's behaviour and an item's tendency agree with each other but contradict a given rating, that rating is flagged as noise.
J. Castro et al. / Decision Support Systems 94 (2017) 1-11

Even though these methods are not the most recent, their effectiveness and simplicity have encouraged their use in many real-world systems. Furthermore, their properties have been extensively researched [21]. For these reasons, they are used in this paper as the single-user RS required as a part of the GRS.

2.2. Group recommender systems

Group recommendation is currently a research area of increasing importance because of the diversity of scenarios in which it is useful. To formalize the GRS problem, the notation in Appendix A is used. Group recommendation is commonly defined as the item (or set of items) that maximises the rating prediction for a group of users, G_a:

Recommendation(I, G_a) = arg max_{i_k ∈ I} Prediction(i_k, G_a)

There are two basic approaches for group recommendation [11], both based on single-user recommendation:

- Rating aggregation (see Fig. 1): a pseudo-user is created for the group by aggregating the members' ratings. The recommendations are generated using this rating profile as the CFRS target user.
- Recommendation aggregation (see Fig. 2): the members' individual recommendations are generated by a CFRS. The GRS aggregates them to produce a recommendation list targeted at the group.

In all scenarios, neither rating aggregation nor recommendation aggregation outperforms the other [18]. So, in order to identify the best one, both should be evaluated.
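Both strategies can be sketched in a few lines. The following fragment is a minimal illustration, not the authors' implementation; ratings are assumed to be held in per-member dictionaries, and `predict(profile, item)` is a hypothetical stand-in for a single-user CF predictor such as the UB or IB methods discussed later.

```python
from statistics import mean

def rating_aggregation(member_ratings, items, predict):
    # Build a pseudo-user profile by averaging the members' ratings per item.
    pseudo_user = {}
    for item in items:
        known = [r[item] for r in member_ratings if item in r]
        if known:
            pseudo_user[item] = mean(known)
    # Recommend the unseen item with the highest prediction for the pseudo-user.
    unseen = [i for i in items if i not in pseudo_user]
    return max(unseen, key=lambda i: predict(pseudo_user, i))

def recommendation_aggregation(member_ratings, items, predict):
    # Predict for each member individually, then aggregate the predicted scores.
    unseen = [i for i in items if not any(i in r for r in member_ratings)]
    return max(unseen, key=lambda i: mean(predict(r, i) for r in member_ratings))
```

Replacing `mean` with `min` in the aggregation step yields the least misery variant used in the experiments of Section 4.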
2.3. Natural noise

The management of unintentionally inconsistent user preferences, so-called natural noise, is a relatively new research field in CFRSs [2,24]. Natural noise appears due to factors such as the change of preferences over time, individual conditions, rating strategies, and social influences [24]. Amatriain et al. [2] developed a study to verify that natural noise biases the recommendations. The results show that users tend to change their preferences, and that these inconsistencies could affect the recommendation accuracy.

O'Mahony et al. [23] were the first to use the term natural noise. The authors suggest that if a rating r_{u,i} is noise-free it should be retained, and if it contains natural noise it should be removed. With this purpose they establish the consistency between the original rating value and a new value predicted by a recommendation algorithm for the same user-item pair. Later on, several authors focused on natural noise from different points of view [3,24].

Previous research is based on different principles, but presents limitations, such as the removal of information from the dataset [23] or the need for additional information [3,24]. To overcome these limitations, Yera et al. [29] recently proposed a two-step method that requires only the rating matrix (see Fig. 3a):

1. Noise detection: ratings are tagged as not noisy or possibly noisy according to their corresponding user and item behaviour (see Fig. 3b). Each rating and its corresponding user and item are classified as high, medium or low. If the user and item behaviour are the same and contradict the rating classification, then the rating is tagged as possible noise.
2. Noise correction: for each possibly noisy rating, a prediction is computed for its corresponding user and item. If the difference between the old and the new value exceeds a threshold, the prediction replaces the original value.

Fig. 3. General scheme of natural noise management for individuals [29]: (a) general scheme, from the original rating database to the corrected rating database; (b) classification of the ratings as not noisy or possibly noisy (PN: possible noise).

This NNM approach is used in our proposal for GRS because it only needs the information in the rating matrix (further details in [29]). A simple example of its performance is based on the ratings shown in Table 1. The user and item classes (c_u and c_i, respectively) are the classes with the majority of ratings, defined over low = {1, 2}, med = {3} and high = {4, 5}. If the majority is not absolute, the class is variable. In the example, a rating is classified as possibly noisy when it contradicts both the user behaviour c_{u_j} and the item tendency c_{i_k}.

Table 1. Illustrative example for the classification of the ratings as possibly noisy: a user-item ratings matrix annotated with the user classes c_u and the item classes c_i.
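The detection rule and the correction step can be made concrete with a short sketch. It follows the description above and the classes of Table 1, but the threshold value and the `predict(user, item)` function are assumptions, not the exact implementation of [29]:

```python
from collections import Counter

def rating_class(r):
    # low = {1, 2}, med = {3}, high = {4, 5}
    return "low" if r <= 2 else ("med" if r == 3 else "high")

def classify(ratings):
    # Majority class of a rating set; 'variable' when the majority is not absolute.
    counts = Counter(rating_class(r) for r in ratings)
    cls, freq = counts.most_common(1)[0]
    return cls if freq > len(ratings) / 2 else "variable"

def nnm(user_ratings, item_ratings, ratings, predict, delta=1.0):
    # ratings: {(user, item): value}; delta is an assumed correction threshold.
    corrected = dict(ratings)
    for (u, i), r in ratings.items():
        cu = classify(user_ratings[u])   # user behaviour
        ci = classify(item_ratings[i])   # item tendency
        # Possible noise: user and item agree with each other but not with the rating.
        if cu == ci != "variable" and cu != rating_class(r):
            p = predict(u, i)
            if abs(p - r) >= delta:      # correct only when far from the original
                corrected[(u, i)] = p
    return corrected
```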
3. Natural noise management in group recommendation

NNM is particularly interesting in GRSs, because it is not clear whether the natural noise in the members' ratings also biases the group recommendation. Therefore, it is important to verify that NNM also plays an important role in GRS accuracy. However, NNM for individual RSs cannot be directly applied to GRSs, because of the particular features of GRSs. Therefore, different alternatives for NNM in group recommendation are introduced in this section.

To propose an NNM approach in GRS, two levels of data are considered to exist in the group recommendation scenario: (i) the local level, preferences belonging to the group members, and (ii) the global level, preferences of all the users in the entire dataset. The level most suitable for performing the NNM should then be studied. Following this aim, four alternatives for NNM are presented across both levels of data:

- First, two approaches that focus the NNM on the local level, before the recommendations: local NNM based on local information (NNM-LL) and local NNM based on global information (NNM-LG).
- Second, an approach that focuses the NNM on the global level before the recommendation, disregarding the group to which each user might belong, noted as global NNM (NNM-GG).
- Eventually, a cascade hybridisation of the previous approaches is presented (NNM-H). It performs a global NNM, and then a local NNM that corrects the group ratings by using the already corrected information.

In the remainder of this section we use the notation introduced in Appendix A to detail each NNM approach.

3.1. Local natural noise management based on local information (NNM-LL)

The NNM-LL approach is depicted in Fig. 4a. It analyses the ratings of the target group, R_{Ga}, and corrects them using only the information provided by the group members. This approach assumes that only the preferences associated with the group members should be taken into account in such a characterisation, and thus that this reduced amount of information might be enough to apply NNM in group recommendation.

Algorithm 1 computes the group recommendation by managing natural noise for each group G_a, using all the ratings of the group, R_{Ga}. Algorithm 2 adapts the NNM process [29] to group recommendation, i.e., NNM-LL, whose main feature is that the item classification uses only the local information of G_a, R_{Ga,ik} (Algorithm 2, line 4).

Algorithm 1. GRS with local NNM based on local information (NNM-LL)
  Data: U, I, R, G
  1 BuildRecommendationModel(U, I, R)
  2 foreach G_a in G do
  3   R*_{Ga} = NNMLL(G_a, R_{Ga})
  4   recommendations_{Ga} = Recommend(G_a, R*_{Ga})
  5 return recommendations

Algorithm 2. Procedure for local NNM based on local information (NNM-LL)
  Data: G_a, R_{Ga}
  Result: R*_{Ga}
  1 foreach r_{mj,ik} in R_{Ga} do
  2   c_u = Classify(m_j, R_{mj,·})
  3   c_i = Classify(i_k, R_{Ga,ik})
  4   if (c_u = c_i) and (c_u ≠ c_{r_{mj,ik}}) and (c_u ≠ variable) then
  5     p = Predict(R_{Ga}, m_j, i_k)
  6     if |p − r_{mj,ik}| ≥ δ then r*_{mj,ik} = p
  7 return R*_{Ga}

In NNM-LL only the data from the group G_a is taken into account. The small amount of data used to perform this analysis and correction makes it suitable to be applied online, when the recommendations are requested.

3.2. Local natural noise management based on global information (NNM-LG)

The NNM-LG approach is depicted in Fig. 4b. NNM-LG is similar to the previous approach in terms of the rating set that it analyses and corrects. However, NNM-LG assumes that the ratings in G_a are not enough to properly classify the items. Therefore it classifies the items using all the information in the dataset, R_{·,ik} (see Table 1).

A GRS with the NNM-LG approach follows the general scheme of Algorithm 1; however, for the item classification it uses all the ratings of the item, R_{·,ik} (modifying Algorithm 2, line 4). With NNM-LG the item profiling uses more information than in the NNM-LL approach. This feature is key in recommendations targeted at groups, because only a few ratings might be available for a given item, which might lead to a different classification of the items.

Fig. 4. Scheme of local NNM for GRS, showing the two approaches: (a) NNM-LL, (b) NNM-LG. The group ratings are corrected against the rating database and then fed to the group recommender system, which produces the top-N items for the group.
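The only difference between the two local approaches is the rating set that feeds the item classification (Algorithm 2, line 4). A sketch that makes this explicit, reusing the hypothetical `classify`, `rating_class` and `predict` helpers from the previous fragment:

```python
def local_nnm(group_ratings, all_ratings, predict, delta=1.0, global_items=False):
    # NNM-LL when global_items is False; NNM-LG when it is True.
    corrected = dict(group_ratings)
    for (m, i), r in group_ratings.items():
        # User behaviour is always taken from the member's own ratings, R_{m,.}
        cu = classify([v for (u, _), v in all_ratings.items() if u == m])
        # Item tendency: R_{Ga,ik} for NNM-LL, R_{.,ik} for NNM-LG.
        pool = all_ratings if global_items else group_ratings
        ci = classify([v for (_, j), v in pool.items() if j == i])
        if cu == ci != "variable" and cu != rating_class(r):
            p = predict(m, i)
            if abs(p - r) >= delta:
                corrected[(m, i)] = p
    return corrected
```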
3.3. Global natural noise management (NNM-GG)

The NNM-GG approach is depicted in Fig. 5. It computes the recommendations after applying the NNM to the entire dataset (see Algorithm 3). All the ratings in the database are analysed and corrected, similarly to the NNM applied to single-user RSs [29]. Algorithm 4 presents how the NNM-GG approach is applied to the entire dataset.

Algorithm 3. GRS with global natural noise management (NNM-GG)
  Data: U, I, R, G
  1 R* = NNMGG(R)
  2 BuildRecommendationModel(U, I, R*)
  3 foreach G_a in G do
  4   recommendations_{Ga} = Recommend(G_a, R*_{Ga})
  5 return recommendations

Algorithm 4. Procedure for global natural noise management (NNM-GG)
  Data: R
  Result: R*
  1 foreach r_{uj,ik} in R do
  2   c_u = Classify(u_j, R_{uj,·})
  3   c_i = Classify(i_k, R_{·,ik})
  4   if (c_u = c_i) and (c_u ≠ c_{r_{uj,ik}}) and (c_u ≠ variable) then
  5     p = Predict(R, u_j, i_k)
  6     if |p − r_{uj,ik}| ≥ δ then r*_{uj,ik} = p
  7 return R*

Fig. 5. Global natural noise management based on global information (NNM-GG): the whole rating database is corrected before the group recommender system computes the top-N items for the group.

Due to the number of ratings being revised, the NNM-GG approach must be applied offline. However, it might result in a better NNM, because the recommendation model is built with a preprocessed database, and thus the influence of natural noise on the model might be reduced.

3.4. Hybrid global-local natural noise management (NNM-H)

The NNM-H approach is depicted in Fig. 6. It combines NNM-GG and NNM-LG. The reason for this hybridisation is that once NNM-GG is applied, the classification of the group ratings and their correction might be affected by the modifications made to the initial dataset. Therefore, NNM-LG is applied afterwards to revise the group ratings using the corrected ratings dataset.

Algorithm 5 presents NNM-H, in which the entire dataset is initially corrected (line 1) and a local correction is then performed on the group ratings (line 4) to analyse them with respect to the revised dataset.

Algorithm 5. GRS with hybrid natural noise management (NNM-H)
  Data: U, I, R, G
  1 R* = NNMGG(R)
  2 BuildRecommendationModel(U, I, R*)
  3 foreach G_a in G do
  4   R'_{Ga} = NNMLG(G_a, R*_{Ga})
  5   recommendations_{Ga} = Recommend(G_a, R'_{Ga})
  6 return recommendations

Fig. 6. Hybrid global-local natural noise management (NNM-H): a first step corrects the whole rating database (NNM-GG); a second step corrects the group ratings against the corrected database (NNM-LG) before the group recommendation is computed.

3.5. Illustrative example

This section presents an illustrative example of the different approaches for NNM in GRS. The example uses the data shown in Table 1, in which the target group G_a is composed of users u1, u2, and u3.

For NNM-LL only the ratings of the group members are evaluated, and each item is classified using the group ratings, R_{Ga,ik}; under this local view a single group rating is flagged as possibly noisy.

For NNM-LG the rating set revised is also R_{Ga}, but the item classification is done with the complete dataset, R_{·,ik}; with this richer item profile, two group ratings are flagged as possibly noisy.

For NNM-GG, all the ratings in the database are revised, and the item classification is done with the complete dataset. The possibly noisy ratings include the ones detected by NNM-LG, together with further ratings from the rest of the database.

For NNM-H, the first step is to apply NNM-GG and correct the possibly noisy ratings; the resulting database R* is shown in Table 2. The second step is to apply NNM-LG over R*_{Ga}, considering the corrected ratings database; this second step flags one further group rating as possibly noisy.

Table 2. Rating database after the NNM-GG correction of Table 1, with the values corrected by NNM-GG highlighted in bold. This rating database is R* and is used as input for NNM-LG to produce R'_{Ga}.
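The complete hybrid pipeline of Algorithm 5 can be condensed into a short sketch. It chains the helpers above; `build_model` and `recommend` are hypothetical stand-ins for the single-user model building and the group recommendation step:

```python
def grs_nnm_h(users, items, R, groups, predict, build_model, recommend):
    # Step 1 (offline, NNM-GG): analyse and correct the entire rating database.
    user_rows = {u: [v for (x, _), v in R.items() if x == u] for u in users}
    item_cols = {i: [v for (_, y), v in R.items() if y == i] for i in items}
    R_star = nnm(user_rows, item_cols, R, predict)
    model = build_model(users, items, R_star)
    results = {}
    for Ga in groups:  # groups: iterable of frozensets of user ids
        # Step 2 (online, NNM-LG): revise the group ratings against R*.
        R_Ga = {(u, i): v for (u, i), v in R_star.items() if u in Ga}
        R_Ga_prime = local_nnm(R_Ga, R_star, predict, global_items=True)
        results[Ga] = recommend(model, Ga, R_Ga_prime)
    return results
```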
4. Case study

To measure the effect of the previous NNM approaches on GRS performance, an experimental procedure is used to evaluate them. This section presents the experimental protocol used in the experiments; the results are then presented and discussed to verify the hypotheses introduced in Section 1.

4.1. Experimental protocol

This research applies a widely-used evaluation protocol [11] for RSs, which is composed of four main steps: (i) compute the training and test partitions of the original dataset, (ii) generate the groups randomly, (iii) generate the group recommendation for each group using the training set, and (iv) evaluate the recommendations individually, as in single-user recommendation, by comparing the group recommendations with the users' ratings in the test set.

The datasets used in this case study are:

- The MovieLens 100k dataset, composed of 100,000 ratings given by 943 users over 1682 movies in the one-to-five-star domain. (Collected by the GroupLens Research Project at the University of Minnesota, http://grouplens.org.)
- The Netflix Tiny dataset, composed of 4427 users, 1000 items, and 56,136 ratings in the same domain. (Small version of the Netflix dataset, included in the Personalised Recommendation Algorithms Toolkit, http://prea.gatech.edu.) Due to the high sparsity of this dataset, only those users with more than 10 ratings are used; specifically, 1757 users.

The design of a group recommendation algorithm must identify:

- The GRS approach which the algorithm uses.
- The aggregation scheme used to combine the members' information.
- The single-user RS that the GRS uses internally.

The case study focuses on evaluating the techniques on occasional groups, therefore the users are grouped randomly, as done in previous research on GRS [11]. Given the number of available users in each dataset, each NNM approach (incorporating an aggregation approach and a single-user recommender) is evaluated using 50 randomly generated groups, and this procedure is repeated 20 times. The evaluation measure MAE is used in all cases. The group sizes are 5, 10 and 15 for each evaluation; larger groups are excluded because they are not used in these kinds of experimental scenarios [11,12].

Section 2.2 introduced the group recommendation approaches to be evaluated: rating aggregation and recommendation aggregation. Because each approach manages the users' preferences in a different way, one could expect that the effect of the NNM algorithm in a GRS strongly depends on whether it is based on rating or recommendation aggregation. The effects of the NNM proposals (Section 3) on each GRS approach are therefore analysed separately. Below, the performance of the NNM approaches described in Section 3 is evaluated in a recommendation aggregation-based GRS with the mentioned aggregation schemes and single-user RSs; Section 4.3 then develops a similar study for rating aggregation-based GRS, and Section 4.4 discusses the results.

Regarding the aggregation approaches, De Pessemier et al. refer to different methods used to aggregate the preferences of the group. Several works [5,12] have pointed out that the average (Avg) and least misery (Min) approaches obtain the best results; therefore, these aggregation approaches are used in our evaluation.

Finally, the experimental procedure compares the presented NNM approaches with baseline approaches that do not perform NNM, using as single-user RSs Resnick's user-based (UB) and Sarwar's item-based (IB) CF methods (see Section 2.1).
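The protocol itself is easy to reproduce. Below is a minimal sketch under the stated settings (random occasional groups, 50 groups per run, 20 runs, MAE against the held-out individual ratings); `group_recommend` and the train/test split are assumed to exist:

```python
import random

def evaluate_mae(users, test_ratings, group_recommend,
                 group_size=5, n_groups=50, n_runs=20, seed=0):
    # users: list of user ids; test_ratings: {(user, item): true rating}.
    rng = random.Random(seed)
    errors = []
    for _ in range(n_runs):
        for _ in range(n_groups):
            group = rng.sample(users, group_size)   # occasional (random) group
            predictions = group_recommend(group)    # {(user, item): predicted}
            for (u, i), p in predictions.items():
                if (u, i) in test_ratings:          # evaluate individually
                    errors.append(abs(p - test_ratings[(u, i)]))
    return sum(errors) / len(errors)                # mean absolute error
```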
4.2. Results in recommendation aggregation-based GRS

This section presents the results of the NNM for GRS based on recommendation aggregation. Table 3 shows the experimental results for the MovieLens 100k and Netflix Tiny datasets, regarding the mentioned aggregation approaches (Avg and Min) and single-user RSs (IB and UB). In each case, the best NNM result for each GRS has been highlighted.

Table 3. MAE results for the recommendation aggregation approach: for each dataset (MovieLens 100k, Netflix Tiny), prediction technique (IB, UB), aggregation (Avg, Min) and group size (5, 10, 15), the MAE of the baseline is compared with NNM-LL, NNM-LG, NNM-GG and NNM-H.

In general, the results clearly show that NNM leads to better performance of the group recommendation algorithms, but the improvement is closely associated with the NNM approach:

- NNM-LL. In all cases, the application of the NNM-LL approach does not imply a significant improvement in performance; NNM-LL obtains a performance similar to that of the baseline. This suggests that the use of local information is not enough to properly characterise the items and manage the natural noise of the group information.
- NNM-LG. In contrast to NNM-LL, the NNM-LG approach produces, overall, a slight improvement over the baseline method. For IB+Min the improvement observed is greater, specifically around 0.01. However, for UB+Avg it does not provide a significant improvement, since the MAE difference is less than 0.001; in the remaining cases the results for NNM-LG show a narrow improvement. These facts show that NNM-LG performs better than NNM-LL, which suggests that using more information to characterise the items improves the NNM for recommendation aggregation.
- NNM-GG. The results show that the NNM of the entire dataset outperforms the local correction. Considering that natural noise is distributed across the whole dataset, the NNM approaches described in Section 3 were expected to cause an improvement in this scenario, as it has been demonstrated that NNM in single-user recommendation introduces a performance improvement [29]. In general, NNM-GG obtains improvements when compared to the baseline and the local approaches.
- NNM-H. The best performance in the evaluated cases is obtained by the NNM-H approach. The results show that the hybridisation of NNM-GG and NNM-LG outperforms their individual performance. NNM-H obtains the best results for the IB prediction technique with both Min and Avg aggregations, and for UB+Min.

4.3. Results in rating aggregation-based GRS

This section presents the results of the NNM approaches applied to a rating aggregation-based GRS. Similarly to the previous section, the results are presented in Table 4, regarding the aggregation approaches (Avg and Min) and single-user RSs (IB and UB). The best NNM result for each configuration has been highlighted.

Table 4. MAE results for the rating aggregation approach, with the same structure as Table 3: dataset, prediction technique, aggregation, and group size, comparing the baseline with NNM-LL, NNM-LG, NNM-GG and NNM-H.

In general, the results show a performance of the NNM similar to the previous GRS approach. The improvements that each NNM approach provides over the baseline differ, however, and thus the results of each technique are analysed separately:

- NNM-LL. The results do not show improvements over the baseline. This again suggests that the use of local information is not enough to properly characterise the items and manage the natural noise of the group information.
- NNM-LG. In general, the NNM-LG approach provides very narrow improvements over the baseline. Specifically, it only provides improvements for the IB+Min GRS approach with big groups, in both datasets. This improvement might be due to the differences in MAE which the baseline obtains for IB+Min and IB+Avg; the NNM approach thus has a greater margin for improvement.
- NNM-GG. The NNM-GG approach improves the accuracy of the baseline and of the local approaches, NNM-LL and NNM-LG. This behaviour might be due to the amount of ratings being analysed and corrected.
- NNM-H. Overall, the results place the NNM-H approach as the best NNM approach for rating aggregation.
4.4. Discussion

This section focuses on the verification of the hypotheses presented in Section 1 by analysing the experimental results. The results obtained determine that H1 is rejected: managing the natural noise only in the group ratings is not enough to obtain improvements on a given GRS. On the other hand, H2 is accepted: managing the natural noise in the entire ratings database, disregarding the groups, improves the group recommendation. Also, H3 is accepted: managing the natural noise in the entire ratings database and, after that, adding a second step that manages the natural noise in the group ratings improves the results as compared to applying only a single step of NNM. The details of the analysis performed to test the hypotheses are explained in the following sections.
4.4.1. H1: managing natural noise in the group ratings only would improve the GRS

The H1 hypothesis has been tested by analysing the results of the local level approaches, NNM-LL and NNM-LG, compared with the baseline. In general the local NNM provides narrow improvements, although for the rating aggregation GRS with IB+Min there are improvements when comparing NNM-LG with the baseline. To test whether the results obtained are significant, the paired samples t-test is applied to the results of the executions; this test checks whether the differences found in the paired samples are statistically significant. The test is applied to determine whether the corresponding local NNM technique results differ from the baseline in each case. The p-values for each of the cases tested are depicted in Tables 5 and 6 for the MovieLens 100k and Netflix Tiny datasets, respectively (NNM-LL and NNM-LG columns). The tests that were able to reject the equality with a confidence level of 95% have been highlighted. Although some of the results show statistical differences between the baseline and NNM-LL and NNM-LG, their improvements are limited to specific cases: NNM-LL improves for both datasets for rating aggregation with IB+Min and for recommendation aggregation with IB+Avg, and NNM-LG improves for both datasets with the IB prediction technique.

Therefore, hypothesis H1 is rejected, as it only shows statistically significant improvements in a few cases. Hence, the application of local-based techniques does not provide significant improvements to the group recommendation.

Table 5. Paired samples t-test p-values comparing each NNM technique (NNM-LL, NNM-LG, NNM-GG, NNM-H) with the baseline on the MovieLens 100k dataset, per group aggregation, prediction technique and group size; tests that reject equality at the 95% confidence level are highlighted.

Table 6. Paired samples t-test p-values comparing each NNM technique with the baseline on the Netflix Tiny dataset, with the same structure as Table 5.

4.4.2. H2: managing natural noise in the entire ratings database, disregarding the groups, would improve the group recommendation

The H2 hypothesis has been tested by analysing the results of the NNM-GG and NNM-H approaches compared with the baseline. In general, the proposals improve the results of the baseline in terms of MAE by around 0.02. To test whether the results obtained are significant, the results are tested similarly to H1; the p-values are depicted in Tables 5 and 6 for the MovieLens and Netflix Tiny datasets, respectively (NNM-GG and NNM-H columns), and the tests that were able to reject the equality with a confidence level of 95% have been highlighted. All the statistical tests that compare NNM-GG and NNM-H with the baseline reject the equality of the results. These results show that the global NNM approaches improve the group recommendation across the different group aggregation and prediction techniques in both datasets evaluated.

Therefore, the hypothesis H2 is accepted. The greater improvement achieved by NNM-GG and NNM-H over the baseline may occur because group recommendation with a CFRS depends not only on the group ratings, but also on the collaborative information that the remaining users provide; NNM at the global level improves the predictions, and thus the GRSs produce better recommendations.

4.4.3. H3: managing natural noise in the entire ratings database and, after that, adding a second step that manages natural noise in the group ratings, would improve the results as compared to a single step of NNM

The H3 hypothesis has been tested by comparing the results of NNM-GG with NNM-H. In general, the NNM-H approach improves the results of the NNM-GG approach. To test whether the improvements obtained are significant, the results have been statistically tested; the results of the paired samples t-test comparing NNM-GG and NNM-H on the different configurations are depicted in Table 7, and they show that the techniques present statistically significant improvements in general. Analysing the results in detail, the NNM-H approach shows improvements compared to NNM-GG in all the configurations with the IB prediction technique, in both datasets and in both rating and recommendation aggregation. Focusing on the UB prediction technique, NNM-H provides improvements in recommendation aggregation with UB+Min.

Therefore, H3 is accepted for the IB prediction techniques and for UB+Min with recommendation aggregation. Consequently, the best approach for managing natural noise in group recommendation is the NNM-H approach, especially with the IB prediction technique.

Table 7. Paired samples t-test p-values comparing NNM-GG with NNM-H, per group aggregation (rating, recommendation), prediction technique, group size and dataset.
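The significance test used throughout Section 4.4 is a standard paired-samples t-test over the per-execution results. A sketch with SciPy follows; the MAE values in the usage example are made up for illustration:

```python
from scipy import stats

def compare_techniques(baseline_maes, nnm_maes, alpha=0.05):
    # Paired test: both lists contain the MAE of the same executions.
    t_stat, p_value = stats.ttest_rel(baseline_maes, nnm_maes)
    return p_value, p_value < alpha   # reject equality at 95% confidence

# Hypothetical per-run MAE values:
p, significant = compare_techniques([0.874, 0.900, 0.908],
                                    [0.861, 0.884, 0.891])
```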
4.5. Complexity and deployment

According to the previous section, NNM-H performs better than NNM-GG for the IB prediction techniques. Hence we focus on the deployment of a real-world GRS with NNM-H and IB prediction, taking two important issues into account: (i) the complexity order of the NNM-H approach, and (ii) the update frequency of the IB recommendation model. Consequently, this section studies the complexity of the NNM-H approach and provides some advice on its deployment in an IB prediction GRS. The notations are based on the ones introduced in Appendix A.

1. Complexity order of the NNM-H approach. The NNM-H approach consists of applying NNM-GG before the IB model building and applying NNM-LG before the recommendation phase. First, the complexity order of the NNM-GG approach is studied, which is composed of the rating detection and the rating correction:

- Rating detection: the ratings are evaluated to detect the noisy ones. The rating detection complexity order is O(m · p), with p the maximum number of ratings associated with a single user or item.
- Rating correction: a correction is computed for each noisy rating using UKNN. If the noisy ratings of each user are grouped to be corrected (an optimisation over Algorithm 4), then the UKNN neighbour computation, which is the costly part of the correction, is done once per user. Therefore the complexity of the rating correction for one user is O(m · p).

Given that m >> p [21], the complexity order of NNM-GG is O(m²). The complexity of NNM-LG is studied in a similar way:

- Rating detection: its complexity order depends on the number of ratings given by the group, therefore it is O(r · p), with r the group size.
- Rating correction: the correction is done similarly to NNM-GG. The difference is the necessity of computing the KNN neighbours once per group member (an optimisation over Algorithm 2 for NNM-LG); therefore the rating correction complexity is O(m · p) for one group member.

Therefore the complexity order of NNM-LG is O(r · (p + m · p)). Given that m >> p [21], the NNM-LG complexity order is O(r · m). Summarising, the complexity of NNM-H is O(m²) in the model computation and O(r · m) in the recommendation computation.
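The grouping optimisation mentioned for the rating correction (computing the UKNN neighbourhood once per user rather than once per noisy rating) can be sketched as follows; `knn_neighbours` and `predict_from` are hypothetical helpers for the neighbourhood search and the neighbourhood-based prediction:

```python
from collections import defaultdict

def correct_noisy_ratings(noisy, R, knn_neighbours, predict_from, delta=1.0):
    # noisy: iterable of (user, item) pairs flagged as possible noise.
    by_user = defaultdict(list)
    for u, i in noisy:
        by_user[u].append(i)
    corrected = dict(R)
    for u, items in by_user.items():
        neighbours = knn_neighbours(u, R)        # costly step, done once per user
        for i in items:
            p = predict_from(neighbours, u, i)   # cheap per-item prediction
            if abs(p - R[(u, i)]) >= delta:
                corrected[(u, i)] = p
    return corrected
```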
The best result 0 <0.0001 <0.0001<0.0001 0.0001 were obtained by the NNM-H approach, which performs a cascade 15 0.0001 00001<0.0001 <0.000 hybridisation of the global and local approaches, i.e., it manages the 5 <0.0001 <0.0001<0.0001 <0.000 10 <0.0001 <00001<0.0001 0.0001 natural noise first at a global level(entire dataset)and then, man 5 <00001 <0.0001<0.0001 <0.0001 ages the natural noise with the corrected data base at a local level UB Avg 5 0.2652 0.56640.1273 0.4065 group ratings). The results show clear improvements in the case 0.3890 0.6813 0.1118 0.6998 study developed 0.0466 0.6162 0.5110 0.4616 In the future we will explore the use of fuzzy tools to provide a 0.2169<0.0001 0.3520 10 0.5637 0.3818 0.0001 0.0040 better representation of the group information with a better under- 15 0.7230 0.1835<0.0001 0.0250 standing of the group information, a more effective and flexible nnm can be performed, leading to improvements in the group recommen dations. In addition the role of nnm in cold -start recommendation scenarios will be explored then the UKnn neighbour computation, which is the costly Acknowledgments part of the correction, is done once per user. Therefore the complexity of the rating correction for one user is o(mp This research work was partially supported by the research Project TIN-2015-66524-P, the Spanish FPU fellowship(FPU13101151), and also the Eureka SD Project(agreement number 2013-2591). Given that m p[21, the complexity order of NNM-GG is O(m ). Now it is necessary to study the complexity of NNM-LG Appendix A. Notation used in CFRS and grS In a similar way: Rating detection: its complexity order depends on the num- To formalize the problem behind CFRSs and grs we use ber of ratings given by the group, therefore it is O(r.p), being notation: mcxm;· Rating correction: the correction is done similarly to nNM U=u1,., um) is the set of users GG. The difference is the necessity of computing the knn I=(il,.., in is the set of items. neighbours once per group member (optimisation over RCUx Iis the set of known ratings Algorithm 2 for NNM-LG), therefore the rating correction Tuin. E R is the rating given by user u over item iK. complexity is o(m p) for one group member. Rui. are the ratings given by user u Ri,e are the ratings over item ik. Ga =(m1,., mr) C G is the target group. Ga members are Therefore the complexity order of NNM-LG is O(r*(p+ m* p)) noted by aliases, m1,..., mr. A user may belong to several Given that m >>p[21, NNM-LG complexity order is O(r.m) groups, being g the set of all groups Summarising, the complexity of NNM-H is o(m )in the model RGu are the ratings provided by the members of ga computation and o(r.m) in the recommendation computation RGain are the ratings provided by ga members over item ik. 2. Deployment of a GRS with NNM-H and iB prediction Cu;and cin is the class of uj and ik, respectively The grss based on iB prediction generate a model to com R,Ru ., and RG. is the result of the NNM over the correspond pute the recommendations, which have a complexity order Ing rating set. O(n)[21]. In a deployed RS, the item based model is calculated offline and updated with certain frequency [30, typically daily Table a8 clarifies the usage of the notation over the commonly In this case, NNM-H integration is straightforward used ratings table. For the sake of clearness, the group members are On the other hand, there are domains in which complete model updates are not affordable. 
Appendix A. Notation used in CFRS and GRS

To formalize the problem behind CFRSs and GRSs we use the following notation:

- U = {u1, ..., um} is the set of users.
- I = {i1, ..., in} is the set of items.
- R ⊆ U × I is the set of known ratings.
- r_{uj,ik} ∈ R is the rating given by user uj over item ik.
- R_{uj,·} are the ratings given by user uj.
- R_{·,ik} are the ratings given over item ik.
- G_a = {m1, ..., mr} ⊆ U is the target group; its members are noted by the aliases m1, ..., mr. A user may belong to several groups, with G the set of all groups.
- R_{Ga} are the ratings provided by the members of G_a.
- R_{Ga,ik} are the ratings provided by G_a members over item ik.
- c_{uj} and c_{ik} are the classes of uj and ik, respectively.
- R*, R*_{uj,·}, and R*_{Ga} denote the result of the NNM over the corresponding rating set.

Table A.1 clarifies the usage of the notation over the commonly used ratings table; for the sake of clearness, the group members are noted by their aliases.

Table A.1. Notation used in the algorithms and its interpretation in terms of the set of elements that each symbol refers to.
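For reference, this notation maps directly onto the simple data structures used in the sketches above; the values below are a made-up miniature example:

```python
# U, I: the sets of users and items.
U = {"u1", "u2", "u3"}
I = {"i1", "i2"}

# R: the known ratings r_{u,i} as a sparse {(user, item): value} mapping.
R = {("u1", "i1"): 5, ("u2", "i1"): 4, ("u3", "i2"): 2}

# R_{u,.}: the ratings given by one user (a row of the ratings table).
R_u1 = {k: v for k, v in R.items() if k[0] == "u1"}

# R_{.,i}: the ratings given over one item (a column of the ratings table).
R_i1 = {k: v for k, v in R.items() if k[1] == "i1"}

# Ga ⊆ U: the target group; R_{Ga}: the ratings of its members.
Ga = frozenset({"u1", "u2"})
R_Ga = {k: v for k, v in R.items() if k[0] in Ga}
```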
