Peter

Reputation: 11

Solving multi-armed bandit problems with continuous action space

My problem has a single state and an infinite number of actions on the interval (0, 1). After quite some time of googling I found a few papers about the so-called zooming algorithm, which can handle a continuous action space. However, my implementation is bad at exploiting, so I'm thinking about adding some epsilon-greedy kind of behavior (a rough sketch of what I mean is below the code samples).

Is it reasonable to combine different methods?

Do you know other approaches to my problem?

Code samples:

import math
import numpy as np
import portion as P
def choose_action(self, i_ph):
    # Activation rule: activate a new arm if part of (0, 1) is not yet
    # covered by the confidence intervals of the active arms
    not_covered = P.closed(lower=0, upper=1)
    for arm in self.active_arms:
        confidence_radius = calc_confidence_radius(i_ph, arm)
        confidence_interval = P.closed(arm.norm_value - confidence_radius, arm.norm_value + confidence_radius)
        not_covered = not_covered - confidence_interval
    
    if not_covered != P.empty():
        # pick the new arm's position uniformly from the uncovered region,
        # weighting each atomic interval by its length
        rans = []
        height = 0
        heights = []
        for i in not_covered:
            rans.append(np.random.uniform(i.lower, i.upper))
            height += i.upper - i.lower
            heights.append(i.upper - i.lower)
        ran_n = np.random.uniform(0, height)
        j = 0
        ran = 0
        for i in range(len(heights)):
            if j <= ran_n < j + heights[i]:
                ran = rans[i]
            j += heights[i]
        # Arm, self.sigma_square and lower come from the rest of my class
        self.active_arms.append(Arm(len(self.active_arms), ran * (self.sigma_square - lower) + lower, ran))

    # Selection rule
    max_index = float('-inf')
    max_index_arm = None
    for arm in self.active_arms:
        confidence_radius = calc_confidence_radius(i_ph, arm)

        # index function from the zooming algorithm:
        # average learning reward plus twice the confidence radius
        index = arm.avg_learning_reward + 2 * confidence_radius

        if index > max_index:
            max_index = index
            max_index_arm = arm
    action = max_index_arm.value
    self.current_arm = max_index_arm
    return action

def learn(self, action, reward):
    # update the arm that was pulled last (set in choose_action)
    arm = self.current_arm
    arm.avg_reward = (arm.pulled * arm.avg_reward + reward) / (arm.pulled + 1)
    if reward > self.max_profit:
        self.max_profit = reward
    elif reward < self.min_profit:
        self.min_profit = reward

    # normalize reward to [0, 1]
    high = 100
    low = -75
    if reward >= high:
        reward = 1
        self.high_count += 1
    elif reward <= low:
        reward = 0
        self.low_count += 1
    else:
        reward = (reward - low)/(high - low)

    arm.avg_learning_reward = (arm.pulled * arm.avg_learning_reward + reward) / (arm.pulled + 1)
    arm.pulled += 1

# zooming algorithm confidence radius
def calc_confidence_radius(i_ph, arm: Arm):
    return math.sqrt((8 * i_ph)/(1 + arm.pulled))
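
What I have in mind for the epsilon-greedy part is roughly the sketch below (untested; eps and the wrapper name choose_action_eps are just placeholders): keep the zooming step exactly as it is, but with probability eps pull a random active arm instead of the one with the highest index.

import numpy as np

def choose_action_eps(self, i_ph, eps=0.1):
    # First run the normal zooming step, so the activation rule still
    # adds new arms when part of (0, 1) is uncovered.
    action = self.choose_action(i_ph)
    # With probability eps, ignore the greedy choice and pull a random
    # active arm instead; self.current_arm must point at it so that
    # learn() updates the right arm afterwards.
    if np.random.random() < eps and self.active_arms:
        arm = self.active_arms[np.random.randint(len(self.active_arms))]
        self.current_arm = arm
        return arm.value
    return action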

Upvotes: 1

Views: 2123

Answers (1)

reim

Reputation: 612

You may find this useful; the full algorithm description is here. They grid out the probes uniformly; informing this choice (e.g. centering a normal distribution on an arm with reputedly high reward) is also possible, but that might invalidate a few of the bounds, I am not sure.
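
To illustrate the uniform gridding, here is a rough sketch: a plain UCB1 index over a fixed grid of probes on (0, 1). K, the horizon and the name grid_ucb are arbitrary choices of mine, not from the linked description.

import numpy as np

def grid_ucb(pull, K=20, horizon=10000):
    # pull(action) -> reward in [0, 1]; K equally spaced probes on (0, 1)
    grid = [(k + 0.5) / K for k in range(K)]
    counts = np.zeros(K)
    sums = np.zeros(K)
    for t in range(1, horizon + 1):
        if t <= K:
            k = t - 1                             # play every probe once first
        else:
            means = sums / counts
            bonus = np.sqrt(2 * np.log(t) / counts)
            k = int(np.argmax(means + bonus))     # UCB1 index
        r = pull(grid[k])
        counts[k] += 1
        sums[k] += r
    return grid[int(np.argmax(sums / counts))]

The first K rounds just pull every probe once; after that the index trades off the empirical mean against the exploration bonus.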

Upvotes: 1
