

Title:
SYSTEM AND METHOD FOR PREDICTING AVERAGE INVENTORY WITH NEW ITEMS
Document Type and Number:
WIPO Patent Application WO/2020/056286
Kind Code:
A1
Abstract:
An example method for predicting average inventory with newly added items can include: aggregating sales data of a plurality of items, the items comprising training items and new items; identifying, using a set of predefined rules, a data set of similar items on the training items for each of the new items, the set of predefined rules comprising a first stage similarity module, a second stage similarity module, and a second stage classification module; obtaining target metrics for each of the new items, the target metrics being turn predictions from the data set of the similar items; calculating mean errors of the turn predictions to identify a set of turn predictions with mean errors lower than a dynamic threshold; obtaining an ultimate turn prediction for each new item by averaging the set of turn predictions; and predicting an average inventory for each new item based on the ultimate turn prediction.

Inventors:
MAHALANOBISH OMKER (IN)
Application Number:
PCT/US2019/051047
Publication Date:
March 19, 2020
Filing Date:
September 13, 2019
Assignee:
WALMART APOLLO LLC (US)
International Classes:
G06Q10/08
Foreign References:
US20150032502A12015-01-29
US20040260600A12004-12-23
Other References:
FERREIRA ET AL.: "Analytics for an online retailer: Demand forecasting and price optimization", MANUFACTURING & SERVICE OPERATIONS MANAGEMENT, 13 November 2015 (2015-11-13), Retrieved from the Internet [retrieved on 20191103]
BODENSTAB: "Demand Forecasting for New Product Introductions", TOOLSGROUP, 29 November 2016 (2016-11-29), Retrieved from the Internet [retrieved on 20191103]
JILL: "Ultimate Guide to Inventory Forecasting", INVENTORY PLANNER, 9 August 2018 (2018-08-09), XP055695186, Retrieved from the Internet [retrieved on 20191103]
Attorney, Agent or Firm:
KAMINSKI, Jeffri A. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method, comprising:

aggregating, by a processor of a computing device, sales data of a plurality of items in a category from a database, the items comprising training items and new items, the sales data of each of the items comprising attributes and item-based Point-of-Sale (PoS) data;

identifying, by the processor using a set of predefined rules, a data set of similar items on the training items for each of the new items, the set of predefined rules comprising a first stage similarity module, a second stage similarity module, and a second stage classification module;

obtaining target metrics for each of the new items, the target metrics being turn predictions from the data set of the similar items;

calculating mean errors of the turn predictions to identify a set of turn predictions with mean errors lower than a dynamic threshold;

obtaining an ultimate turn prediction for each of the new items by averaging the set of turn predictions; and

predicting an average inventory for each of the new items based on the ultimate turn prediction.

2. The method of claim 1, wherein the identifying the similar items using the first stage similarity module comprises performing a K-Nearest Neighbors algorithm to obtain the data set of the similar items.

3. The method of claim 2, wherein the identifying the similar items using the second stage similarity module comprises:

performing a 50-Nearest Neighbors algorithm for each of the new items using the attributes to obtain the data set of the similar items; and

performing a 30-Nearest Neighbors algorithm to further filter down the data set of the similar items by using the PoS data.

4. The method of claim 1, wherein the identifying the similar items using the second stage classification module further comprises:

randomly choosing 50% of the new items;

performing a 30-Nearest Neighbors algorithm for each of the new items using both the attributes and the POS data to identify the similar items;

predicting a turn value for each of the new items based on a data set of identified similar items;

comparing the turn value with an observed turn to obtain an error in a turn prediction for each of the new items; and

storing the data set of the similar items, their comparison turn values, and the error in turn prediction.

5. The method of claim 4, further comprising:

creating a decision tree based on comparisons of turn values of the similar items by iterating a number of simulations;

filtering out the similar items which lead to an erroneous prediction;

performing a 30-Nearest Neighbors algorithm on an entire data set of the new items; and

updating the similar items based on the decision tree.

6. The method of claim 1, wherein the defining the target metrics for each of the new items further comprises calculating a weighted mean of similarities on the data set of the similar items.

7. The method of claim 1, wherein the turn predictions of the target metrics are obtained by performing linear regression on the data set of the similar items, and wherein the target metrics are dependent variables and the item-based PoS data are predictors.

8. The method of claim 1, wherein the dynamic threshold is 1.2 times a minimum of the mean errors of the predictions.

9. The method of claim 1, wherein the set of predefined rules comprises Gower’s distance measures.

10. A system for predicting a stock on hand for predefined markdown plans, comprising:

a processor; and

a non-transitory computer-readable storage medium having instructions stored which, when executed by the processor, cause the processor to perform operations comprising:

aggregating, by a processor of a computing device, sales data of a plurality of items in a category from a database, the items comprising training items and new items, and the sales data of each of the items comprising attributes and item-based Point-of-Sale (PoS) data;

identifying, by the processor using a set of predefined rules, a data set of similar items on the training items for each of the new items, the set of predefined rules comprising a first stage similarity module, a second stage similarity module, and a second stage classification module;

obtaining target metrics for each of the new items, the target metrics being turn predictions from the data set of the similar items;

calculating mean errors of the turn predictions to identify a set of turn predictions with mean errors lower than a dynamic threshold;

obtaining an ultimate turn prediction for each of the new items by averaging the set of turn predictions; and

predicting an average inventory for each of the new items based on the ultimate turn prediction.

11. The system of claim 10, wherein the identifying the similar items using the first stage similarity module comprises performing a K-Nearest Neighbors algorithm to obtain the data set of the similar items.

12. The system of claim 10, wherein the identifying the similar items using the second stage similarity module comprises:

performing a 50-Nearest Neighbors algorithm for each of the new items using the attributes to obtain the data set of the similar items; and

performing a 30-Nearest Neighbors algorithm to further filter down the data set of the similar items by using the PoS data.

13. The system of claim 12, wherein the identifying the similar items using the second stage classification module further comprises:

randomly choosing 50% of the new items;

performing a 30-Nearest Neighbors algorithm for each of the new items using both the attributes and the POS data to identify the similar items;

predicting a turn value for each of the new items based on a data set of identified similar items;

comparing the turn value with an observed turn to obtain an error in a turn prediction for each of the new items; and

storing the data set of the similar items, their comparison turn values, and the error in turn prediction.

14. The system of claim 13, wherein the identifying the similar items using the second stage classification module further comprises:

creating a decision tree based on comparisons of turn values of the similar items by iterating 200 simulations;

filtering out the similar items which lead to an erroneous turn prediction;

performing a 30-Nearest Neighbors algorithm on an entire data set of the new items; and

updating the similar items based on the decision tree.

15. The system of claim 10, wherein defining the target metrics for each of the new items further comprises calculating a weighted mean of similarities on the data set of the similar items.

16. The system of claim 10, wherein the turn predictions of the target metrics are obtained by performing linear regression on the data set of the similar items, and wherein the target metrics are dependent variables and the item-based PoS data are predictors.

17. The system of claim 10, wherein the dynamic threshold is 1.2 times a minimum of the mean errors of the predictions.

18. The system of claim 10, wherein the set of predefined rules comprises Gower’s distance measures.

19. A non-transitory computer-readable storage medium having instructions stored which, when executed by a computing device, cause the computing device to perform operations comprising:

aggregating, by a processor of a computing device, sales data of a plurality of items in a category from a database, the items comprising training items and new items, and the sales data of each of the items comprising attributes and item-based Point-of-Sale (PoS) data;

identifying, by the processor using a set of predefined rules, a data set of similar items on the training items for each of the new items, the set of predefined rules comprising a first stage similarity module, a second stage similarity module, and a second stage classification module;

obtaining target metrics for each of the new items, the target metrics being turn predictions from the data set of the similar items;

calculating mean errors of the turn predictions to identify a set of turn predictions with mean errors lower than a dynamic threshold;

obtaining an ultimate turn prediction for each of the new items by averaging the set of turn predictions; and

predicting an average inventory for each of the new items based on the ultimate turn prediction.

20. The non-transitory computer-readable storage medium of claim 19, wherein identifying the data set of similar items using the second stage classification module comprises a plurality of operations, the plurality of operations comprising:

randomly choosing 50% of the new items;

performing a 30-Nearest Neighbors algorithm for each of the new items using both the attributes and the POS data to identify the data set of the similar items;

predicting a turn value for each of the new items based on the data set of similar items;

comparing the turn value with an observed turn to obtain an error in a turn prediction for each of the new items; and

storing the data set of the similar items, their comparison turn values, and the error in turn prediction.

Description:
SYSTEM AND METHOD FOR

PREDICTING AVERAGE INVENTORY WITH NEW ITEMS

Cross-Reference to Related Applications

[0001] This patent application claims priority to Indian Provisional Patent Application No. 201811034765, filed September 14, 2018, and U.S. Provisional Application No. 62/779,086, filed December 13, 2018, the contents of which are incorporated by reference herein.

BACKGROUND

1. Technical Field

[0002] The present disclosure relates to inventory control, and more specifically to systems and methods for predicting average inventory with newly added items.

2. Introduction

[0003] New items are continuously introduced into the market almost every day. Before the new items are selected and included as part of the store assortment in an inventory of retail stores, it is highly important to identify and manage an inventory for each of the new items, mainly to meet budget and assortment constraints.

[0004] There is a need to conduct inventory prediction and assortment selection for the new items and to determine when and how many new items can be added into the product assortment in order to satisfy customer demand. Inventory prediction and management may result in better sales opportunities for each of the new items, thereby reducing inventory costs.

SUMMARY

[0005] An example computer-implemented method of performing concepts disclosed herein can include: aggregating, by a processor of a computing device, sales data of a plurality of items in a category from a database, the items comprising training items and new items, the sales data of each of the items comprising attributes and item-based Point-of-Sale (PoS) data; identifying, by the processor using a set of predefined rules, a data set of similar items on the training items for each of the new items, the set of predefined rules comprising a first stage similarity module, a second stage similarity module, and a second stage classification module; obtaining target metrics for each of the new items, the target metrics being turn predictions from the data set of the similar items; calculating mean errors of the turn predictions to identify a set of turn predictions with mean errors lower than a dynamic threshold; obtaining an ultimate turn prediction for each of the new items by averaging the set of turn predictions; and predicting an average inventory for each of the new items based on the ultimate turn prediction.

[0006] A system for predicting average inventory with newly added items, the system comprising: a processor; and a non-transitory computer-readable storage medium having instructions stored which, when executed by the processor, cause the processor to perform operations comprising: aggregating, by a processor of a computing device, sales data of a plurality of items in a category from a database, the items comprising training items and new items, the sales data of each of the items comprising attributes and item-based Point-of-Sale (PoS) data; identifying, by the processor using a set of predefined rules, a data set of similar items on the training items for each of the new items, the set of predefined rules comprising a first stage similarity module, a second stage similarity module, and a second stage classification module; obtaining target metrics for each of the new items, the target metrics being turn predictions from the data set of the similar items; calculating mean errors of the turn predictions to identify a set of turn predictions with mean errors lower than a dynamic threshold; obtaining an ultimate turn prediction for each of the new items by averaging the set of turn predictions; and predicting an average inventory for each of the new items based on the ultimate turn prediction.

[0007] A non-transitory computer-readable storage medium having instructions stored which, when executed by a computing device, cause the computing device to perform operations including: aggregating, by a processor of a computing device, sales data of a plurality of items in a category from a database, the items comprising training items and new items, the sales data of each of the items comprising attributes and item-based Point-of-Sale (PoS) data; identifying, by the processor using a set of predefined rules, a data set of similar items on the training items for each of the new items, the set of predefined rules comprising a first stage similarity module, a second stage similarity module, and a second stage classification module; obtaining target metrics for each of the new items, the target metrics being turn predictions from the data set of the similar items; calculating mean errors of the turn predictions to identify a set of turn predictions with mean errors lower than a dynamic threshold; obtaining an ultimate turn prediction for each of the new items by averaging the set of turn predictions; and predicting an average inventory for each of the new items based on the ultimate turn prediction.

[0008] Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] Example embodiments of this disclosure are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

[0010] FIG. 1 is a block diagram illustrating an example computing system in accordance with some embodiments of the present invention;

[0011] FIG. 2 is a flowchart diagram illustrating an example process for predicting an average inventory with new items in accordance with some embodiments; and

[0012] FIG. 3 is a block diagram of an example computer system in which some example embodiments may be implemented.

[0013] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, are intended only to provide further explanation of the invention as claimed, and are therefore not intended to limit the scope of the disclosure.

DETAILED DESCRIPTION

[0014] Various example embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. Throughout the specification, like reference numerals denote like elements having the same or similar functions. While specific implementations and example embodiments are described, it should be understood that this is done for illustration purposes only. Other components and configurations may be used without departing from the spirit and scope of the disclosure, and can be implemented in combinations of the variations provided. These variations shall be described herein as the various embodiments are set forth.

[0015] The concepts disclosed herein are directed to systems and methods of predicting an average inventory with newly added items in order to obtain an optimal and efficient inventory assortment and maximize retail sales. As described in greater detail below, the system uses machine learning algorithms such as K-Nearest Neighbors (KNN) for data classification.

[0016] Generally, an item assortment selection process involves the use of a turn rate. The turn rate or turn value for an item is defined as the ratio of the item sales amount to the item average inventory. The item sales amount is the product of the average unit price and the Units sold Per Store Per Week (UPSPW) for an item over a comparative time frame. The average inventory for an item is defined as the average on-hand inventory, in retail dollars, at the stores of interest over a comparative time frame.
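As a purely illustrative example (the numbers are not taken from the disclosure), if an item's average unit price is $5.00, its UPSPW is 20 units, and its average on-hand inventory is $50.00 in retail dollars, then its sales amount is $5.00 × 20 = $100.00 per store per week and its turn is 100 / 50 = 2.0.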

[0017] In some embodiments, a data set of similar items is obtained after running different modules or methods described below. A target metric is defined as a metric which can be predicted from the obtained data set of the similar items. Turn and inverse turn are two different target metrics and are considered in the systems and methods described herein.

[0018] In the systems of the present disclosure, it is assumed that the similar items in a category may have similar turns. Thus, a problem of conducting assortment selection for new items and obtaining inventory prediction can be solved by identifying a set of most similar items for each of the new items.

[0019] FIG. 1 is a block diagram illustrating an example computing system 100 in which some example embodiments may be implemented. The example computing system 100 generally includes a computing device 110, a database 120, a terminal 130, and network 140.

[0020] The computing device 110 may be a server or a computer terminal associated with a retailer. The computing device 110 may include processor 10 and memory 12. The memory 12 may store various modules or executable instructions/applications to be executed by the processor 10. In some embodiments, the system may include a data pre-processing module 14 and an average inventory predicting module 16.

[0021] The computing device 110 communicates with the database 120 to execute one or more sets of processes. The database 120 may be communicatively coupled to the computing device 110 to receive instructions or data from and send data to the computing device 110 via network 140.

[0022] The database 120 may store customer transaction history, which includes all purchase records in retail stores during a period of time (e.g., a day/week/month/year). The database 120 may store data associated with existing items, new items, and added items. Existing items are present in stores across both a training period and a validation period. New items are also present in both the training period and the validation period; in some cases, performance of the new items has increased manifold in the validation period as compared to the training period. This set of items is considered a proxy for new items. Added items are items newly added in the validation period that are not present in the training period.

[0023] The terminal 130 may represent at least one of a portable device, a tablet computer, a notebook computer, or a desktop computer that allows a category manager or customer to communicate with the computing device 110 to perform online activities via network 140.

[0024] The network 140 may include a satellite-based navigation system or a terrestrial wireless network, Wi-Fi, and other types of wired or wireless networks to facilitate communications between the various network devices associated with the example computing system 100.

[0025] In some embodiments, the data pre-processing module 14 is configured to acquire and preprocess the data required by the average inventory predicting module 16. The data pre-processing module 14 can obtain the retailer's transaction sales data, especially the item-based transaction history at Point-of-Sale (PoS) devices in a retail store.

[0026] The item-based PoS history or data records may include item data associated with units, price, inventory, etc. The data may be associated with sales data of items (e.g., products) in a category in a given period (day/week/month/year). All records associated with the items may include item attributes, such as brand, fineline, dimension, item type, pack size, etc. The descriptions of multiple items are grouped together to form a fineline description. Each item may be characterized by a unique set of attributes. The item attributes differ from item to item. For example, the brand attribute may have different values for each of the items in the same category. The items may be present in the retail stores during a training period or a validation period. The training period may be 1 year pre mod drop. Other time frames may also be used. The validation period may be 26 weeks post mod drop.

[0027] The system may take a plurality of approaches to find the similar items for each new item, for example, using a KNN algorithm and Gower’s distance measures. For the similar items, the system may use aggregation techniques to predict two different target metrics, such as a turn and an inverse turn. Further, the system may obtain the predicted value of an average inventory with newly added items using the predicted target metrics.

[0028] In some embodiments, the system may use 3 different modules or methods for data classification to identify similar items, including:

1) a first stage similarity module with K-Nearest Neighbors algorithm;

2) a second stage similarity module with K-Nearest Neighbors algorithm; and

3) a second stage classification module with K-Nearest Neighbors algorithm.

[0029] A First Stage Similarity Module with K-Nearest Neighbor Method and Gower’s Distance

[0030] A first stage or simple Similarity Module for the new items is performed using the K-Nearest Neighbor algorithm and Gower’s distance. K-Nearest Neighbor analysis is performed on each of the new items and a data set of training items using the attributes as well as the POS data of the items. The data set of the training items may include the existing items in a category.

[0031] The distance metric considered is Gower’s distance (a measure of dissimilarity). Gower’s distance may be calculated for each of the new items with the data set of the training items with regard to mixed-type variables, such as nominal and continuous variables. The mixed-type variables may be associated with different attributes as well as POS data of the items. The KNN algorithm will search through the data set of the training items for the K most similar instances. For example, when K is chosen to be 20, the module definition can be modified to obtain the K% nearest items to each of the new items. The process does not depend on the size of the category under consideration.
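The disclosure does not include source code; the following Python sketch shows one way the first stage similarity module could be realized, assuming the items are held in a pandas DataFrame and that the nominal/continuous column split and the column names are hypothetical. Gower's distance is implemented directly: nominal attributes contribute 0/1 mismatches and continuous variables contribute range-normalized absolute differences.

```python
import numpy as np
import pandas as pd

def gower_distance(new_item, training_items, nominal_cols, continuous_cols):
    """Gower's dissimilarity between one new item (a Series) and every training item."""
    parts = []
    for col in nominal_cols:
        # Nominal variables: 0 if the category matches, 1 otherwise.
        parts.append((training_items[col] != new_item[col]).astype(float))
    for col in continuous_cols:
        # Continuous variables: absolute difference scaled by the observed range.
        rng = training_items[col].max() - training_items[col].min()
        rng = rng if rng > 0 else 1.0
        parts.append((training_items[col] - new_item[col]).abs() / rng)
    # Equal weights across all variables, as in the basic Gower formulation.
    return pd.concat(parts, axis=1).mean(axis=1)

def first_stage_similar_items(new_item, training_items, nominal_cols,
                              continuous_cols, k=20):
    """Return the k training items nearest to the new item under Gower's distance."""
    d = gower_distance(new_item, training_items, nominal_cols, continuous_cols)
    return training_items.loc[d.nsmallest(k).index]
```

This is a minimal sketch; a production system would, for example, compute variable ranges over the combined training and new-item data and may weight attribute and PoS variables differently.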

[0032] The first stage modification using the K-Nearest Neighbor algorithm may also be included in and applied to the following two methods. The following two methods use the K-Nearest Neighbor algorithm with additional processing to restrict the size of the set of similar items obtained and to generate a more meaningful, reduced data set of similar items for each new item.

[0033] A Second Stage Similarity Module with K-Nearest Neighbor Method

[0034] First, the 50-Nearest Neighbor algorithm is performed for each of the new items, using only the attribute information. Second, another 30-Nearest Neighbor algorithm is run to further filter down the data set of similar items by using the related POS data. Thus, a much smaller set of the most similar items is obtained for each new item.
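A possible realization of this two-step filter is sketched below using scikit-learn's NearestNeighbors. The Euclidean metric stands in for the Gower distance described above, and the feature arrays are assumed to be numerically encoded; both are simplifications for illustration, not part of the disclosure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def second_stage_similar_items(new_attr, new_pos, train_attr, train_pos,
                               k_attr=50, k_pos=30):
    """Filter training items: 50-NN on attributes, then 30-NN on PoS data.

    new_attr / new_pos   : 1-D feature vectors for one new item.
    train_attr / train_pos: 2-D arrays with one row per training item.
    Returns indices (into the training arrays) of the ~30 most similar items.
    """
    # Step 1: 50 nearest neighbours in attribute space.
    nn_attr = NearestNeighbors(n_neighbors=k_attr).fit(train_attr)
    _, idx50 = nn_attr.kneighbors(new_attr.reshape(1, -1))
    idx50 = idx50[0]

    # Step 2: among those 50, keep the 30 nearest in PoS space.
    nn_pos = NearestNeighbors(n_neighbors=min(k_pos, len(idx50))).fit(train_pos[idx50])
    _, idx30 = nn_pos.kneighbors(new_pos.reshape(1, -1))
    return idx50[idx30[0]]
```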

[0035] A Second Stage Classification Module with K-Nearest Neighbor Method

[0036] In some embodiments, a plurality of simulations are performed in the second stage classification. Each simulation may be conducted with the following example steps.

1) Randomly choose 50% of the new items.

2) For each of the new items, the 30-Nearest Neighbor algorithm is run using both the attributes as well as POS data, to identify the similar items.

3) New metrics are introduced for each of the similar items, based on their UPSPW, store count and price in the pre period as compared to the post period.

4) Using the data set of predicted similar items, the new item’s turn value can be predicted and compared to its observed turn. The error in turn prediction can be obtained for each of the new items chosen in the simulation.

5) After each simulation, for each new item, the data set of similar items is stored along with their comparison metrics and the error in turn prediction.

[0037] The metric associated with the error in turn prediction can be the Symmetric Mean Absolute Percentage Error (SMAPE). The turn for each of the new items in the post period is calculated based on the module. A SMAPE is calculated based on the relative difference between the predicted turn and the observed turn.
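The disclosure names SMAPE but does not spell out the formula; one common formulation, applied here to predicted and observed turns, is sketched below.

```python
import numpy as np

def smape(predicted, observed):
    """Symmetric Mean Absolute Percentage Error (one common formulation)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean(2.0 * np.abs(predicted - observed)
                         / (np.abs(predicted) + np.abs(observed))))
```

For example, smape([2.1], [2.0]) is about 0.049, i.e., roughly a 4.9% symmetric error.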

[0038] After a number of simulations are performed, for example, 200 simulations, a decision tree is created on the comparison metrics of the similar items. The decision tree is used to filter out those similar items which may potentially lead to an erroneous prediction.
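A condensed sketch of this simulation-and-classification stage follows. The layout of the comparison metrics, the hypothetical 'turn' column, the error cut-off used to label a neighbor as contributing to an erroneous prediction, and the tree depth are all illustrative assumptions; scikit-learn's DecisionTreeClassifier simply stands in for whatever decision tree implementation is used.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_neighbor_filter(new_items, find_30nn, comparison_metrics, observed_turn,
                          n_simulations=200, error_cut=0.3):
    """Learn which similar items tend to produce erroneous turn predictions.

    new_items           : list of new-item identifiers
    find_30nn(item)     : returns index labels of ~30 similar training items
    comparison_metrics  : pandas DataFrame indexed by training item, holding the
                          comparison metrics plus a hypothetical 'turn' column
    observed_turn[item] : observed turn of a new item in the post period
    """
    X, y = [], []
    rng = np.random.default_rng(0)
    for _ in range(n_simulations):
        # 1) Randomly choose 50% of the new items.
        sample = rng.choice(new_items, size=len(new_items) // 2, replace=False)
        for item in sample:
            neighbors = find_30nn(item)                      # 2) 30-NN lookup
            # 4) Predict the turn from the neighbors (simple mean as a stand-in).
            pred = comparison_metrics.loc[neighbors, "turn"].mean()
            err = abs(pred - observed_turn[item]) / observed_turn[item]
            # 5) Store each neighbor's comparison metrics with the prediction outcome.
            for n in neighbors:
                X.append(comparison_metrics.loc[n].drop("turn").to_numpy())
                y.append(int(err > error_cut))  # hypothetical cut for "erroneous"
    # Decision tree on the comparison metrics, later used to discard similar items
    # that tend to pull the predicted turn away from the observed turn.
    return DecisionTreeClassifier(max_depth=4).fit(np.array(X), np.array(y))
```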

[0039] Finally, another 30-nearest neighbor algorithm may be performed on the entire data set of new items. For each new item, the obtained decision tree is used to discard those similar items which may potentially pull the predicted turn away from the observed turn. The system may update and obtain a much reduced and reliable data set of the similar items.

[0040] Aggregation Techniques

[0041] Once the data set of similar items is obtained for a given new item, for example as described above, the target metric of each of the similar items is considered in order to come up with a single value of the target metric as a turn prediction. In some embodiments, weighted mean and linear regression are the aggregation techniques used for predicting the target metric.

[0042] 1. Weighted Mean

[0043] For each new item, a weighted mean of the target metric is considered for the data set of similar items, and the similarity scores of the similar items are used as weights. The similarity scores are obtained from the K-nearest neighbor algorithm with Gower’s distance, based on their relative nearness. The similarity scores between the similar items and each of the new items are aggregated, and the weighted mean of the target metric is calculated to obtain one predicted value of the target metric.
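A minimal sketch of the weighted-mean aggregation; converting a Gower distance into a similarity score as 1 − distance is an assumption made for illustration (Gower's distance lies in [0, 1]).

```python
import numpy as np

def weighted_mean_prediction(target_values, gower_distances):
    """Similarity-weighted mean of the target metric over the similar items."""
    target_values = np.asarray(target_values, dtype=float)
    similarity = 1.0 - np.asarray(gower_distances, dtype=float)  # assumed conversion
    return float(np.average(target_values, weights=similarity))
```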

[0044] 2. Linear Regression

[0045] For each new item, a linear regression model can be fitted on the data set of similar items, where the target metric is the dependent variable and the POS data are the predictors (e.g., predictor variables or independent variables). Using the fitted linear regression, the target metric can be readily predicted for the new item.
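A sketch of the regression-based aggregation using scikit-learn; the shape and content of the PoS feature matrix are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def regression_prediction(similar_pos, similar_target, new_item_pos):
    """Fit target ~ PoS features on the similar items, then predict for the new item."""
    model = LinearRegression().fit(np.asarray(similar_pos, dtype=float),
                                   np.asarray(similar_target, dtype=float))
    return float(model.predict(np.asarray(new_item_pos, dtype=float).reshape(1, -1))[0])
```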

[0046] Final Choice of Approaches

[0047] For a given category, once all approaches in this example (3 methods * 2 target metrics * 2 aggregation techniques = 12 approaches) are performed, the mean error of prediction is calculated for each of the approaches. More or fewer approaches may also be used.

[0048] In some embodiments, the system may calculate and obtain mean errors in turn predictions from all the different modules. The system may calculate mean errors of the predictions to identify the approaches with a set of turn predictions in which mean errors are lower than a dynamic threshold. For example, the dynamic threshold can be predefined as 1.2 times a minimum of the mean errors of the turn predictions.
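A sketch of this selection step, assuming the per-approach mean errors and per-item predictions have already been computed (the dictionary layout is hypothetical).

```python
import numpy as np

def ultimate_turn_prediction(mean_error_by_approach, predictions_by_approach, item,
                             factor=1.2):
    """Keep approaches with mean error below 1.2x the minimum, then average them."""
    threshold = factor * min(mean_error_by_approach.values())
    kept = [name for name, err in mean_error_by_approach.items() if err < threshold]
    return float(np.mean([predictions_by_approach[name][item] for name in kept]))
```

Because the minimum mean error is always below 1.2 times itself, at least one approach is always retained.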

[0049] Finally, the identified approach(es) can be performed on the data set of newly added items, and the average of the resulting predictions is considered as the ultimate prediction.

[0050] From Target Metric to Inventory

[0051] The turn may be a direct and logical metric. In some embodiments, it is assumed that similar items have similar turns. This logic can be extended to any one-to-one function of turn, such as the inverse turn. Further, it can be noticed that very low and very high turns for items usually indicate erroneous turn predictions. Thus, the inverse turn is considered because it expands the range of very low turns and at the same time diminishes the range of very high turns.

[0052] The average inventory (represented by avg inventory) can be obtained by using the target metric turn based on the following equations. The average price (e.g., avg price) represents the average price per unit.

[0053] The turn and average inventory are defined with equations (1) and (2).

turn = (avg price * UPSPW) / avg inventory     (1)

avg inventory = (avg price * UPSPW) / turn     (2)

[0054] Similarly, the inverse turn and average inventory (avg inventory) can be defined with equations (3) and (4).

inverse turn = avg inventory / (avg price * UPSPW)     (3)

avg inventory = inverse turn * avg price * UPSPW     (4)

[0055] In some embodiments, a predicted value of the average inventory with newly added items can be obtained by using the turns and inverse turns of the similar items. If the target metrics of the turns and inverse turns are predicted, the predicted value of the average inventory can be easily obtained.
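The conversion from a predicted target metric back to an average inventory follows equations (2) and (4) directly; a minimal sketch:

```python
def avg_inventory_from_turn(avg_price, upspw, turn):
    """Equation (2): average inventory implied by a predicted turn."""
    return (avg_price * upspw) / turn

def avg_inventory_from_inverse_turn(avg_price, upspw, inverse_turn):
    """Equation (4): average inventory implied by a predicted inverse turn."""
    return inverse_turn * avg_price * upspw
```

With the illustrative numbers used earlier, avg_inventory_from_turn(5.0, 20, 2.0) returns 50.0.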

[0056] Obtaining Confidence Metric

[0057] In some embodiments, a value of confidence may be required to evaluate the turn prediction for each item. The confidence of the turn prediction may be independent of the predicted value of average inventory of its own as well as of other items. A decision tree may be used for the same purpose.

[0058] For a given category, once all 12 approaches (3 methods * 2 target metrics * 2 aggregation techniques) are performed, the system can access the predicted average inventory values obtained for each new item using each of the 12 approaches.

[0059] Additionally, the system includes information on the magnitude of the error in prediction in each of the scenarios (e.g., approaches). The predictions with less than 30% error may be tagged as ‘High Confidence’. The predictions with more than 50% error may be tagged as ‘Low Confidence’. The rest of the predictions may be tagged as ‘Medium Confidence’.
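The tagging rule maps directly to code. How predictions at exactly 30% or 50% error are handled is not specified in the disclosure, so the sketch below places them in the 'Medium Confidence' band.

```python
def confidence_tag(relative_error):
    """Tag a prediction by its relative error, expressed as a fraction (0.25 = 25%)."""
    if relative_error < 0.30:
        return "High Confidence"
    if relative_error > 0.50:
        return "Low Confidence"
    return "Medium Confidence"
```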

[0060] A decision tree can be set up with the tagged information as classes, and variables such as UPSPW, store count, brand share, fineline share, and price as explanatory variables. The decision tree may consider inherent category information about the items, such as average price, variance of price, number of existing items, number of new items, category share in a department, etc. The decision tree is trained on the category information and the module that yields the minimum error for that category. For every new item in the same category, the decision tree can predict which module provides the least error for the new item, based on the same category information associated with the similar items. This decision tree may be used to obtain the confidence in the prediction for each of the newly added items, which is totally independent of an item’s own predicted value of average inventory as well as that of the other items.

[0061] Obtaining Estimated Impact

[0062] In some embodiments, the system includes a Heuristic baseline module. The Heuristic baseline module can be used for obtaining the average inventory of new items. The Heuristic baseline module may be used to obtain the following data sets, including:

1) a first mean average inventory of existing items having the same brand;

2) a second mean average inventory of existing items having the same fineline; and

3) an average of the first mean average inventory and the second mean average inventory.

[0063] In some embodiments, the data set obtained by the Heuristic baseline module is considered as a baseline or comparison point. For example, the Heuristic baseline module is used to obtain the average UPSPW of all the items having the same fineline that a new item belongs to. With the turn prediction, the UPSPW, and an average price of the new item, the average inventory of the new item can be obtained based on equation (2).
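A pandas sketch of the Heuristic baseline module; the 'brand', 'fineline', and 'avg_inventory' column names are placeholders for whatever the existing-items table actually contains.

```python
import pandas as pd

def heuristic_baseline(existing_items, new_item):
    """Baseline average inventory from same-brand and same-fineline existing items."""
    same_brand = existing_items.loc[existing_items["brand"] == new_item["brand"],
                                    "avg_inventory"].mean()
    same_fineline = existing_items.loc[existing_items["fineline"] == new_item["fineline"],
                                       "avg_inventory"].mean()
    return {"brand_mean": same_brand,
            "fineline_mean": same_fineline,
            "combined": (same_brand + same_fineline) / 2.0}
```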

[0064] FIG. 2 is a flowchart diagram illustrating an example process 200 for predicting an average inventory with new items in accordance with some embodiments.

[0065] The process 200 may be implemented in the above described systems and may include the following steps. Steps may be omitted or combined depending on the operations being performed.

[0066] In step 202, the data pre-processing module 14 may aggregate sales data of a plurality of items in a category during a given period. The items may include training items and new items newly added into the inventory. The sales data of each of the items include attributes and item-based PoS data.

[0067] In step 204, the system may use a set of predefined rules to identify a data set of similar items on the training items for each of the new items. The set of predefined rules includes a first stage similarity module, a second stage similarity module, and a second stage classification module. The data set of the similar items may include variables associated with each of the similar items, such as turn, inverse turn, UPSPW, average inventory, average price, POS data, store count, attributes, brand share, fineline share, price, variance of price, category share in a department, etc.

In step 206, the system may obtain target metrics for each of the new items. The similar items in a category may have similar turns. The target metrics for a new item are considered as turn predictions from the data set of the similar items.

[0068] In step 208, the processor may be configured to calculate mean errors of the turn predictions to identify a set of turn predictions with mean errors lower than a dynamic threshold. For example, the system can identify the approach(es) with a mean error less than 1.2 times the minimum mean error of the turn predictions.

[0069] In step 210, the processor may be configured to obtain an ultimate turn prediction for each of the new items by averaging the set of turn predictions.

[0070] In step 212, the system may predict an average inventory for each of the new items based on the ultimate turn prediction.

[0071] FIG. 3 illustrates an example computer system 300, which may be used to implement embodiments as disclosed herein. The computing system 300 may be a server, a personal computer (PC), or another type of computing device.

[0072] The exemplary system 300 can include a processing unit (CPU or processor) 320 and a system bus 310 that couples various system components including the system memory 330 such as read only memory (ROM) 340 and random access memory (RAM) 350 to the processor 320. The system 300 can include a cache of high speed memory connected directly with, in close proximity to, or integrated as part of the processor 320. The system 300 copies data from the memory 330 and/or the storage device 360 to the cache for quick access by the processor 320. In this way, the cache provides a performance boost that avoids processor 320 delays while waiting for data. These and other modules can control or be configured to control the processor 320 to perform various actions. Other system memory 330 may be available for use as well. The memory 330 can include multiple different types of memory with different performance characteristics. It can be appreciated that the disclosure may operate on a computing device 300 with more than one processor 320 or on a group or cluster of computing devices networked together to provide greater processing capability. The processor 320 can include any general purpose processor and a hardware module or software module, such as module 1 362, module 2 364, and module 3 366 stored in storage device 360, configured to control the processor 320 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 320 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.

[0073] The system bus 310 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS), stored in ROM 340 or the like, may provide the basic routine that helps to transfer information between elements within the computing device 300, such as during start-up. The computing device 300 further includes storage devices 360 such as a hard disk drive, a magnetic disk drive, an optical disk drive, a tape drive, or the like. The storage device 360 can include software modules 362, 364, 366 for controlling the processor 320. Other hardware or software modules are contemplated. The storage device 360 is connected to the system bus 310 by a drive interface. The drives and the associated computer-readable storage media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computing device 300. In one aspect, a hardware module that performs a particular function includes the software component stored in a tangible computer-readable storage medium in connection with the necessary hardware components, such as the processor 320, bus 310, display 370, and so forth, to carry out the function. In another aspect, the system can use a processor and computer-readable storage medium to store instructions which, when executed by the processor, cause the processor to perform a method or other specific actions. The basic components and appropriate variations are contemplated depending on the type of device, such as whether the device 300 is a small, handheld computing device, a desktop computer, or a computer server.

[0074] Although the exemplary embodiment described herein employs the hard disk 360, other types of computer-readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAMs) 350, and read only memory (ROM) 340, may also be used in the exemplary operating environment. Tangible computer-readable storage media, computer-readable storage devices, or computer-readable memory devices, expressly exclude media such as transitory waves, energy, carrier signals, electromagnetic waves, and signals per se.

[0075] To enable user interaction with the computing device 300, an input device 390 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 370 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 300. The communications interface 380 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.

[0076] The various embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the disclosure. Various modifications and changes may be made to the principles described herein without following the example embodiments and applications illustrated and described herein, and without departing from the spirit and scope of the disclosure.