Summary: This paper explores the potential of data-driven optimization methods for the berth allocation problem (BAP), in particular for handling the uncertainty in actual vessel arrival times caused by external factors such as cross wind and sea currents. Four different machine learning models, from linear regression to an artificial neural network, are employed for vessel arrival time prediction, and the forecast accuracy of the models is compared and evaluated. Dynamic time buffers (DTBs), computed from forecasts based on Automatic Identification System (AIS) data, are introduced to increase the robustness and reliability of the resulting berthing schedules. The experiments show that even fairly simple machine learning methods can deliver highly accurate predictions, and that DTBs significantly reduce actual waiting times and improve service quality.
Intended audience: Researchers, engineers and practitioners interested in maritime logistics planning, port operations management, and the use of data analytics for real-world operational challenges.
Use cases and goals: Provides shipping companies, port operators and related parties with an effective decision-support tool for creating more accurate berthing timetables, improving terminal utilization and raising service levels.
Further notes: The study demonstrates how advanced algorithms can improve specific parts of the transport industry, highlights the importance of accounting for external disturbances in practical applications, and offers valuable guidance for further research in the field.
Flexible Services and Manufacturing Journal (2023) 35:29–69
https://doi.org/10.1007/s10696-022-09462-x
Robust berth scheduling using machine learning for vessel arrival time prediction

Lorenz Kolley¹ · Nicolas Rückert¹ · Marvin Kastner² · Carlos Jahn² · Kathrin Fischer¹
Accepted: 1 August 2022 / Published online: 1 September 2022
© The Author(s) 2022
Abstract
In this work, the potentials of data-driven optimization for the well-known berth allocation problem are studied. The aim of robust berth scheduling is to derive conflict-free vessel assignments at the quay of a terminal, taking into account uncertainty regarding the actual vessel arrival times which may result from external influences such as cross wind and sea current. In order to achieve robustness, four different Machine Learning methods, from linear regression to an artificial neural network, are employed for vessel arrival time prediction in this work. The different Machine Learning methods are analysed and evaluated with respect to their forecast quality. The calculation and use of so-called dynamic time buffers (DTBs), which are derived from the different AIS-based forecasts and whose length depends on the estimated forecast reliability, in the berth scheduling model enhance the robustness of the resulting schedules considerably, as is shown in an extensive numerical study. Furthermore, the results show that even rather simple Machine Learning approaches are able to reach high forecast accuracy. The optimization model not only leads to more robust solutions, but also to shorter actual waiting times for the vessels and hence to an enhanced service quality, as can be shown by studying the resulting schedules for real vessel data. Moreover, it turns out that the accuracy of the resulting berthing schedules, measured as the deviation of planned and actually realisable schedules, exceeds the accuracy of all forecasts, which underlines the usefulness of the DTB approach.

Keywords Berth allocation problem · Machine learning · Uncertainty · (Arrival time) prediction · Robust optimization
* Lorenz Kolley
Lorenz.kolley@tuhh.de

1 Institute for Operations Research and Information Systems, Hamburg University of Technology, Am Schwarzenberg-Campus 4, 21073 Hamburg, Germany
2 Institute of Maritime Logistics, Hamburg University of Technology, Am Schwarzenberg-Campus 4, 21073 Hamburg, Germany
1 Introduction
A highly relevant issue in the planning of (maritime) supply chains and logistics is the uncertainty of deep sea vessel arrivals at ports, as its consequences affect all further stages of the supply chain (Dobrkovic et al. 2016). However, the port terminals are affected most: Berth allocations have to be planned in advance, i.e., while a vessel has not yet arrived, and hence either a berth can still be occupied when a vessel arrives early (and the vessel cannot moor as planned) or a berth remains empty when a vessel is delayed. Both effects have negative economic consequences for the terminal as well as for the vessel owners and, as mentioned above, also for the subsequent supply chain. Therefore, approaches for reducing uncertainty in this field are of huge importance.
As data and digitalization offer new opportunities for uncertainty reduction and mitigation, new approaches for terminal planning and management should be developed from a “data-driven perspective”, as Heilig et al. (2020) point out, in order to enrich the traditional “optimization perspective”. However, these authors still identify “a lack of data-driven approaches in the context of container terminals”. Moreover, they point out that data from external sources, such as Automatic Identification System (AIS) data, are “under-analysed”, i.e., that often they are not yet fully exploited; instead, the focus is often still on operational optimization, without sufficient consideration of patterns in existing data. According to Yang et al. (2019), Data Mining (for extraction of the relevant information) and vessel behaviour analysis (in order to find patterns of maritime traffic) are of high importance for future research in the field, as, e.g., causality analysis builds on them. They point out that there is only little work in Operations Research (OR) yet which makes use of AIS data and that berth allocation might be one field which could strongly benefit from their exploitation in the future.
Therefore, this work is aimed at filling this gap with respect to a specific problem by developing and presenting a new machine-learning-based approach for forecasting vessel arrival times. The results are then used in a berth allocation approach, in order to improve the “traditional” berth allocation problem (BAP) optimization procedure by an appropriate use of existing data. The forecasting approach makes use of AIS data which are broadcast by vessels and, among other information, provide the current position of a vessel on a regular basis. Following Dobrkovic et al. (2016), data from 48 h before the actual vessel arrivals are used for forecasting, as this is the time frame within which vessel arrivals are usually announced to the respective seaport container terminal.
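In such a setup, a forecasting pipeline would first restrict each vessel's AIS position reports to the 48-hour window before its (known) actual arrival. The following is a minimal sketch of this filtering step; the record layout, a `(timestamp, lat, lon)` tuple, is an assumption for illustration and not the authors' data schema.

```python
from datetime import datetime, timedelta

def window_before_arrival(reports, arrival, hours=48):
    """Keep only AIS reports sent within `hours` before the actual arrival.

    `reports` is a list of (timestamp, lat, lon) tuples; `arrival` is the
    known actual arrival time of the vessel (a datetime).
    """
    cutoff = arrival - timedelta(hours=hours)
    return [r for r in reports if cutoff <= r[0] <= arrival]
```

Because the actual arrival time is known for the historical data, the reports surviving this filter can serve both as model input and for evaluating the forecast against the realised arrival.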
Through the use of real AIS data from the past, on which the forecasts are based but for which the actual vessel arrival times are already known as well, the approach taken in this work allows an evaluation of the forecast quality of the different forecasting methods. In the forecasting, methods from the field of machine learning (ML) are used, namely linear regression (LR), k-nearest neighbour (kNN), decision tree regressor (DTR) and artificial neural networks (ANN).
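To make the prediction setup concrete, here is a minimal pure-Python sketch of one of these four methods, a k-nearest-neighbour regressor: the time to arrival of a query vessel is estimated as the mean target of the k most similar historical observations. The two-dimensional feature vector (remaining distance and current speed) is an illustrative assumption, not the paper's actual feature set.

```python
from math import dist

def knn_predict(train_X, train_y, query, k=3):
    """kNN regression: average the targets of the k training samples
    closest to `query` in Euclidean feature space."""
    ranked = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], query))
    return sum(y for _, y in ranked[:k]) / k

# Toy example: (remaining distance in nm, speed in knots) -> hours to arrival.
history_X = [(10.0, 12.0), (20.0, 12.0), (30.0, 12.0), (40.0, 12.0)]
history_y = [1.0, 2.0, 3.0, 4.0]
eta = knn_predict(history_X, history_y, (12.0, 12.0), k=2)  # -> 1.5
```

The other three methods fit the same interface (features in, predicted arrival time out), which is what allows their forecasts to be compared and, later, combined.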
A berthing schedule is called (feasibility-)robust when it remains stable at least for smaller deviations from the originally assumed vessel arrival times, i.e., when the schedule stays feasible (Scholl 2001). To derive a robust berth allocation based on the forecasts, the concepts of conflicts (Liu et al. 2017) and of dynamic time buffers (DTBs) are used (Kolley et al. 2021) and further refined. More specifically, Liu et al. (2017) define and measure robustness as a function of the service level that is achieved, where the service level in turn is defined as the number (or percentage) of vessels which, in the planned berth allocation, are not in conflict with other vessels. In this work, this customer-oriented concept is extended by the integration of time buffers which help to better prevent conflicts even if actual vessel arrival times deviate from scheduled berthing times. Hence, a robust schedule is defined as a schedule with as few conflicts as possible which remains valid under changing arrival times.
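This conflict-based service level can be sketched as follows: two planned berthings conflict if they overlap in both time and quay space, and the service level is the share of vessels free of any conflict. The tuple layout `(start, end, berth_from, berth_to)` is an assumption for illustration, not the notation of Liu et al. (2017).

```python
def in_conflict(a, b):
    """True if two planned berthings overlap both in time and in quay space."""
    (s1, e1, q1, r1), (s2, e2, q2, r2) = a, b
    return (s1 < e2 and s2 < e1) and (q1 < r2 and q2 < r1)

def service_level(plan):
    """Share of vessels that are not in conflict with any other vessel."""
    free = sum(all(not in_conflict(v, w) for w in plan if w is not v)
               for v in plan)
    return free / len(plan)
```

Under this measure, a perfectly robust plan has service level 1.0; buffers (introduced below) widen the intervals that must stay conflict-free.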
The purpose of the approach taken in this work is to reduce uncertainty and thus increase robustness by using the different forecasts, resulting from the different ML methods mentioned above, jointly in the berth scheduling. The forecasts are interpreted as equally likely possible future arrival times; hence, uncertainty is transformed into risk. The DTBs are then constructed based on the resulting “forecast distributions”. The more the forecasts differ, i.e., the more spread there is in the distribution, the higher the uncertainty with respect to the respective arrival time and hence, a larger buffer is needed for mitigating this uncertainty. On the contrary, with very similar forecasts, uncertainty can be assumed to be low (which is why all methods lead to similar results), there is little spread and hence, only a small buffer is necessary. The aim of the BAP approach then is to avoid conflicts, i.e., overlaps, not only for the planned berths, but also with respect to the time buffers, in order to enhance the schedule’s robustness.
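As a minimal illustration of this buffer construction, the sketch below derives a buffer from the disagreement among the individual forecasts. Using the sample standard deviation as the spread measure and the scaling factor `alpha` are assumptions made here for illustration, not the paper's exact DTB definition.

```python
from statistics import mean, stdev

def dynamic_time_buffer(forecasts, alpha=1.0, min_buffer=0.25):
    """Buffer length (in hours) that grows with the spread of the forecasts."""
    spread = stdev(forecasts) if len(forecasts) > 1 else 0.0
    return max(min_buffer, alpha * spread)

def buffered_arrival(forecasts, alpha=1.0):
    """Earliest and latest arrival considered when planning the berth."""
    centre = mean(forecasts)
    buffer = dynamic_time_buffer(forecasts, alpha)
    return centre - buffer, centre + buffer
```

When all four methods agree, the interval collapses to the minimum buffer; when they disagree, the scheduling model must keep a correspondingly larger time window free at the berth.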
For this purpose, the BAP model using DTBs from Kolley et al. (2021) is further modified and improved in this work. It is then applied to the forecast data. As the respective model builds on the model of Liu et al. (2017), on their scenario-based approach and their robustness definition, the berth allocations which are derived using the model presented below are compared to the results from Liu et al.’s model in order to study the impact of the DTBs and the other changes made. The differences between the models are discussed in detail in Sect. 4.3.
The approach chosen in this work, i.e., the combination of AIS data exploration for forecasting, the subsequent use of an optimization approach based on the respective forecasts, and the judgement of the quality of the results based on actual vessel arrivals, is, to the authors’ knowledge, unique, even if some publications use AIS data for forecasting. The contributions of this work hence are manifold:
a) It is discussed and shown how AIS data can be used for arrival time forecasting and how the data sets can be cleaned and prepared beforehand. As Heilig et al. (2020) point out, such “methodological insights regarding data preparation (e.g., data cleansing, feature selection) […] and model evaluation, are not discussed in great detail in literature”; hence, this fills an important gap.
b) The different ML methods which are applied in this work are studied with respect to their forecast quality in order to judge which of them might be most appropriate for vessel arrival time prediction.
c) It is shown how the forecast data can be used to derive more robust berth allocations by applying the concept of DTBs.
d) Moreover, the BAP model from Kolley et al. (2021) is further refined such that the solutions are of more practical use and relevance. This is shown by comparing and benchmarking the resulting solutions with the model of Liu et al. (2017). The latter model is used as a benchmark as its robustness concept, which concentrates on the service level achieved in terms of vessels that can be served without conflict, is the concept which, with further refinements, is also used in this work.
e) The use of real data moreover allows for studying the true service level, i.e., the resulting service level considering the real (not only the predicted) arrival times of the vessels. Hence, the evaluation of the BAP models has high practical (and not only theoretical) relevance.
In summary, the aim of this study is therefore to develop and examine a new procedure that generates robust berthing plans by combining ML and OR methods for vessel arrival time prediction. However, it has to be noted that increased robustness can lead to losses in efficiency, as these two objectives are in conflict: More robustness can be achieved by reserving more time for each vessel, but then the quay’s capacity might not be fully exploited and there will be idle times, impeding efficiency.
The remainder of this paper is structured as follows: Literature on trajectory forecasting with ML in maritime logistics as well as on the BAP is reviewed in Sect. 2. Data pre-processing and the results of the ML algorithms are discussed in Sect. 3. In Sect. 4, the mathematical model for the robust continuous berth allocation problem with dynamic time buffers (ro-DTB-BAPc) is presented. The results of the numerical experiments are given and evaluated in Sect. 5. Finally, the paper is concluded by a critical discussion of the results, a summary and an outlook in Sect. 6.
2 Literature review
2.1 AIS data and machine learning in maritime logistics
AIS was developed in the 1990s in order to improve maritime safety. Vessels regularly broadcast information via AIS, which can be split into the categories of static (e.g., vessel identity and size), dynamic (vessel’s position, speed etc.) and voyage-related information (destination port and ETA), see Yang et al. (2019).
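These three categories can be summarised in a small sketch; the field names below are illustrative and cover only a fraction of the actual AIS message specification.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StaticInfo:        # vessel identity and size
    mmsi: int            # Maritime Mobile Service Identity
    name: str
    length_m: float

@dataclass
class DynamicInfo:       # position, speed etc., broadcast regularly
    timestamp: datetime
    lat: float
    lon: float
    speed_knots: float

@dataclass
class VoyageInfo:        # set by the crew for the current voyage
    destination: str
    eta: datetime
```

For arrival time prediction, the dynamic messages carry most of the signal, while the crew-entered voyage information (such as the self-reported ETA) is known to be less reliable.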
Yang et al. (2019) give an overview of maritime problems in which AIS data are relevant and review different AIS data applications. In their review, Yang et al. state that AIS data were first mostly used in trajectory extraction and clustering for studies on navigation safety and the avoidance of vessel collisions, which they call “basic applications”, but over time also “extended” and “advanced” applications in other areas of maritime research were developed. In their literature review, the authors find a strongly increasing number of publications on such AIS data applications, in particular in the more recent past, i.e., since 2015.
More involved (extended) problem fields in which AIS data were successfully used are vessel behaviour analysis, e.g., with respect to fishing or to investigate navigation patterns or travel times, and environmental evaluations, e.g., the analysis of emissions and, subsequently, the optimization of sailing speed in order to reduce such emissions.
AIS data are also used to analyse, in particular, activities at and near ports. For example, Wu et al. (2020) discuss quality issues of AIS data and, in particular, the problems these data can present with respect to identifying ports, i.e., how a “port event” and whether a vessel is actually at a port can be identified from AIS data. Feng et al. (2020) also use AIS data to derive trajectories in ports, while Chen et al. (2020) exploit AIS data to identify the services and analyse the characteristics of tugboat activities in ports. Franzkeit et al. (2020) investigate vessel waiting times at ports based on AIS data.
ML has recently been applied in maritime logistics in very different ways and to different planning problems. A structured review of the literature in this field is given by Dornemann et al. (2020); these authors also present an in-depth discussion of the combination of OR and ML methods. Moreover, Filom et al. (2022) present an extensive review of ML applications in ports. Therefore, just a few selected recent examples of work relating to berthing and the BAP are given below:
Li and He (2020, 2021) predict liner berthing times using deep learning. While Li and He (2020) present a more basic discussion of their approach, the data pre-processing and some preliminary results, their procedure is refined in the 2021 paper, where it is shown that feature extraction has a huge impact on prediction results. The use of deep learning and an ANN for automatic berthing systems is suggested by Lee et al. (2020), who combine AIS data with actual ML. The authors use nine different supervised ML methods for “predicting the risk range of an unsafe berthing velocity when a ship approaches a port”, i.e., they study a very specific problem.
De Leon et al. (2017) employ ML for algorithm selection, i.e., for a meta-learning problem, to solve the Bulk Carrier BAP. By their approach, the best algorithm for the problem setting at hand can be determined, after the meta-algorithm has been trained on different problem settings and the relevant features having an influence on algorithm performance have been found. In contrast, Cheimanoff et al. (2021) use ML for tuning the hyper-parameters that are used in meta-heuristics for the Bulk Carrier BAP.
With respect to the topic of this work, it should be mentioned that De Leon et al. (2017) as well as Cheimanoff et al. (2021) and many other authors in the field use randomly generated vessel arrival times in their computational studies, i.e., no real data are used and no arrival time prediction is employed, even in the most recent publications. Consequently, no comparison with actual vessel arrival times can be made in these studies, as is done in this work.
2.2 Trajectory forecasting
Many studies in the field of maritime logistics estimate the time needed for given
routes under the assumption that the vessels sail at a known and given speed (Grida