{"id":52,"date":"2010-06-30T14:01:50","date_gmt":"2010-06-30T14:01:50","guid":{"rendered":"http:\/\/fs-s-wpmu-02.facsci.ualberta.ca\/rlai\/?page_id=52"},"modified":"2020-01-06T19:11:21","modified_gmt":"2020-01-06T19:11:21","slug":"parameter-free-step-size-adaptation","status":"publish","type":"page","link":"https:\/\/spaces.facsci.ualberta.ca\/rlai\/projects\/parameter-free-step-size-adaptation\/","title":{"rendered":"Parameter-free Step-size Adaptation"},"content":{"rendered":"<p>Many of the learning algorithms used in the project have parameters that must be tuned manually for good performance; we are always looking for ways that they can be set automatically. Foremost among these parameters is the step-size parameter of stochastic gradient-descent algorithms. Several methods have been proposed for automatically setting step-size parameters, but unfortunately all of them have at least one parameter of their own, and this meta-parameter must generally be tuned manually to the particular problem, thereby limiting the benefit of these methods. This year we have begun a subproject to develop a superior step-size algorithm with no parameters or meta-parameters, that is, that can be applied with no domain knowledge other than what is needed to formulate a problem as gradient descent.<\/p>\n<p>So far we have shown that all previous step-size methods involve at least one meta-parameter and that there is no single setting of the meta-parameters that produces acceptable performance on all tasks. We are focusing in particular on a family of methods related to an algorithm known as K1, previously developed by Sutton, which performs best of the existing methods but is still sensitive to the meta-parameter. We have developed a new algorithm that we call Normalized K1 and which performs well, without tuning, over a much wider range of problems. So far we have tested Normalized K1 on a range of artificial problems. Next we will stress it more severely by applying it to data from the Critterbot.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Many of the learning algorithms used in the project have parameters that must be tuned manually for good performance; we are always looking for ways that they can be set automatically. Foremost among these parameters is the step-size parameter of stochastic gradient-descent algorithms. 