Vincze, Dávid; Kovács, Szilveszter (2015-03-18)
Rule-base reduction in Fuzzy Rule Interpolation-based Q-learning
Article. Vol. 2, No. 1-2 (2015). ISSN 2064-9622.
DOI: 10.17667/riim.2015.1-2/10.
URI: http://hdl.handle.net/2437/208337
Language: English
License: Creative Commons Attribution - NonCommercial - NoDerivs 2.5 Hungary (CC BY-NC-ND 2.5 HU)
Keywords: FRIQ-learning; reinforcement learning; rule-base reduction; fuzzy rule interpolation

Abstract: Fuzzy Rule Interpolation-based Q-learning (FRIQ-learning for short) uses a fuzzy rule interpolation (FRI) method as the reasoning engine within Q-learning. The method was introduced previously by the authors, together with a rule-base construction extension for FRIQ-learning that builds the required FRI fuzzy model from scratch at a reduced size, following an incremental creation strategy. A rule-base created this way will most probably contain rules that were significant during the construction process but play no important role in the final rule-base. There can also be rules that became redundant, i.e. their conclusions can be calculated by fuzzy rule interpolation from other rules in the finished rule-base. The goal of the paper is to introduce methods that automatically find and remove such redundant and unnecessary rules from the rule-base, using variations of newly developed decremental rule-base reduction strategies. The paper also includes an application example demonstrating the applicability of the methods on a well-known reinforcement learning benchmark: the cart-pole simulation.
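To make the decremental reduction idea concrete, below is a minimal sketch of such a reduction loop. It is not the authors' algorithm: Rule, fri_infer (plain 1-D linear interpolation standing in for a full FRI method), and the toy evaluate function are illustrative assumptions; in FRIQ-learning the evaluation would instead be the performance of the agent (e.g., balancing time in the cart-pole task) achieved with the candidate rule-base.

"""Minimal sketch of a decremental rule-base reduction strategy.

NOT the paper's implementation: Rule, fri_infer, and evaluate are
simplified stand-ins (1-D antecedents, linear interpolation instead
of a full FRI method) so the control flow of the strategy stays visible.
"""
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Rule:
    x: float  # antecedent support point (1-D for the sketch)
    q: float  # stored conclusion (e.g. a Q-value in FRIQ-learning)

def fri_infer(rules: List[Rule], x: float) -> float:
    """Stand-in for fuzzy rule interpolation: linear interpolation
    between the two rules neighbouring the observation x."""
    pts = sorted(rules, key=lambda r: r.x)
    if x <= pts[0].x:
        return pts[0].q
    if x >= pts[-1].x:
        return pts[-1].q
    for lo, hi in zip(pts, pts[1:]):
        if lo.x <= x <= hi.x:
            t = (x - lo.x) / (hi.x - lo.x)
            return lo.q + t * (hi.q - lo.q)
    raise ValueError("unreachable")

def reduce_rule_base(rules: List[Rule],
                     evaluate: Callable[[List[Rule]], float],
                     tolerance: float) -> List[Rule]:
    """Decremental reduction: tentatively drop each rule and keep the
    removal if performance does not degrade by more than `tolerance`."""
    baseline = evaluate(rules)
    kept = list(rules)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]   # rule-base without rule i
        if len(candidate) >= 2 and evaluate(candidate) >= baseline - tolerance:
            kept = candidate                  # rule i was redundant: FRI covers it
        else:
            i += 1                            # rule i is essential, keep it
    return kept

if __name__ == "__main__":
    # Rules sampling the line q = 2x: interior rules are recoverable by
    # interpolation, so the reducer should keep only the two endpoints.
    rules = [Rule(x, 2.0 * x) for x in (0.0, 1.0, 2.0, 3.0, 4.0)]
    probes = [0.5 * k for k in range(9)]      # evaluation grid

    def evaluate(rb: List[Rule]) -> float:
        # Negative maximum error over the probe grid (higher is better).
        return -max(abs(fri_infer(rb, x) - 2.0 * x) for x in probes)

    reduced = reduce_rule_base(rules, evaluate, tolerance=1e-9)
    print([r.x for r in reduced])             # -> [0.0, 4.0]

In this toy run the three interior rules are removed one by one because linear interpolation reproduces their conclusions exactly; the endpoint rules are essential and survive, mirroring how a decremental strategy shrinks a rule-base while FRI compensates for the removed rules.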