• Step-by-step implementation of search

    This note considers multi-agent systems seeking to optimize a convex aggregate function. We assume that the gradient of this function is distributed, meaning that each agent can compute its corresponding partial derivative with information about its neighbors and itself only. In such scenarios, the discrete-time implementation of the gradient descent method poses the basic challenge of determining appropriate agent stepsizes that guarantee the monotonic evolution of the objective function. We provide a distributed algorithmic solution to this problem based on the aggregation of agent stepsizes via adaptive convex combinations. Simulations illustrate our results.
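
    The sketch below is a simplified illustration of the setting described above, not the note's actual adaptive scheme: each agent evaluates its partial derivative of an assumed aggregate objective from its own and its neighbors' states only, and per-agent stepsizes are blended with neighbors via a convex combination. The objective, ring topology, stepsize values, and mixing weight are all illustrative assumptions.

```python
import numpy as np

# Assumed undirected ring of 4 agents.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
n = len(neighbors)

def partial_derivative(i, x):
    """Partial derivative w.r.t. x_i of the assumed aggregate objective
    f(x) = 0.5 * sum over edges (i,j) of (x_i - x_j)^2; it depends only on
    agent i's value and its neighbors' values."""
    return sum(x[i] - x[j] for j in neighbors[i])

x = np.array([4.0, -1.0, 2.5, 0.0])   # initial agent states
alpha = np.full(n, 0.4)               # per-agent stepsizes (assumed initial values)
mix = 0.5                             # convex-combination weight (assumed)

for k in range(50):
    grad = np.array([partial_derivative(i, x) for i in range(n)])
    # Each agent aggregates its stepsize with its neighbors' via a convex combination.
    alpha = np.array([
        mix * alpha[i] + (1 - mix) * np.mean([alpha[j] for j in neighbors[i]])
        for i in range(n)
    ])
    x = x - alpha * grad              # distributed gradient descent step

print("final states:", np.round(x, 3))
```

    In this toy example the chosen stepsizes are small enough that the objective decreases monotonically; the note's contribution is a distributed way of adapting the stepsizes so that this monotonic decrease is guaranteed rather than assumed.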
