Advertisement placement is a double-edged sword.
You would like to avoid ruining your fancy design, but at the same time, you could leverage advertisement content to foster discovery, and you might find new ways to scale your business.
You have to improve the experience for both customers and brands, balanced, as all things should be.
There is nothing worse than serving an intrusive ad that adds no value to the user.
An example? I am bald, so please don’t waste your money on advertising shampoos to me 🙂
Over the years at Everli, we introduced new ways for our brands to share their products while also providing a benefit for our customers.
Think about our cashback: every week there could be a new way to get discounts, alongside the common buy-X-pay-Y offers.
The big problem with those implementations was time.
In both cases we built in a rush, since there was no case yet for a complex solution (and we were still learning), so we went ahead with a simple time-based on/off implementation.
Last year we decided to change the status quo.
If you think about it, a time-based model is not “fair”, since different weeks can see very different spikes of users, expected or unexpected.
Think about a TV commercial campaign or the back-to-school month or a viral video.
A performance model, on the other hand, where brands pay only when the ads they place lead to specific results, will scale our business.
More slots, more time, proper fees, and the right items for the right customers could be an improvement for everyone.
Balance is the word.
After building a dedicated squad, the first thing we had to decide was the usual make vs. buy question, since there is no tool on the market able to handle a multi-store catalog.
Think about Amazon: there is one huge catalog, not N different catalogs with items shared across them.
A good example is the item “Nutella”: it can be available in one store at a certain price, in a second store at a different price, and not available at all in a third.
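To make the multi-store idea concrete, here is a minimal sketch in Python (our stack is PHP; this is purely illustrative, and all store and item names are made up): each store has its own view of a shared item, with its own price or no availability at all.

```python
# Illustrative multi-store catalog: the same item can exist in several
# stores at different prices, or be missing entirely (hypothetical data).
catalog = {
    # store_id -> {item_id: price}
    "store_milan": {"nutella_750g": 4.99, "coffee_250g": 3.49},
    "store_rome":  {"nutella_750g": 5.29},
    "store_turin": {"coffee_250g": 3.29},  # no Nutella here
}

def price_for(store_id: str, item_id: str):
    """Return the store-specific price, or None when unavailable."""
    return catalog.get(store_id, {}).get(item_id)

print(price_for("store_milan", "nutella_750g"))  # 4.99
print(price_for("store_rome", "nutella_750g"))   # 5.29
print(price_for("store_turin", "nutella_750g"))  # None
```

A single-catalog ad tool assumes one price and one availability per item, which is exactly what breaks in this model.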
Timing is another important requirement in two different ways.
The first one, outbound, is the setup and propagation of the records.
Every night we update our databases with retailers’ data.
We are talking about ~58M prices in 4 countries.
We can’t wait the entire morning for the update, otherwise customers and shoppers are not able to find the right prices and stock levels.
Same behavior for the runtime requests from the customers (inbound).
We can’t chain several server-to-server HTTP calls to serve a customer API request.
We should be incredibly fast.
As a first step, we chose the buy path.
Right now our goal is not to build an AdServer, but to provide our highly demanding customers with peace of mind.
As usual, we would like to test our assumptions first, and if necessary iterate.
The vendor we chose was Kevel: we appreciated the quality of its documentation, its API-first approach, and the speed of the integration.
Right now there is no official PHP library, hence we are developing ours and then we plan to release it as an open-source project.
The final design was to store everything on our side: all the active promotions live in our database and are reflected in Kevel.
Stakeholders plan campaigns in our internal dashboard, then we forward the changes to the AdServer via API requests, storing the persistent configuration in the database while active campaigns are kept in memory.
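The write-through idea can be sketched as follows (a simplified Python illustration, not our PHP implementation; `CampaignStore`, `FakeAdServer`, and the campaign fields are invented for this example): persist locally, mirror to the AdServer, and cache the currently active campaigns in memory.

```python
from datetime import date

class CampaignStore:
    """Sketch of the write-through flow (hypothetical names throughout)."""
    def __init__(self, database, ad_server_client):
        self.database = database            # persistent configuration
        self.ad_server = ad_server_client   # stand-in for an AdServer API client
        self.active = {}                    # in-memory active campaigns

    def save(self, campaign: dict):
        self.database.append(campaign)               # 1. persist locally
        self.ad_server.upsert_campaign(campaign)     # 2. mirror to the AdServer
        if campaign["starts"] <= date.today() <= campaign["ends"]:
            self.active[campaign["id"]] = campaign   # 3. cache if active

class FakeAdServer:
    """In-memory stand-in so the sketch runs without any real API."""
    def __init__(self):
        self.received = []
    def upsert_campaign(self, campaign):
        self.received.append(campaign["id"])

db, ads = [], FakeAdServer()
store = CampaignStore(db, ads)
store.save({"id": "c1", "starts": date(2000, 1, 1), "ends": date(2100, 1, 1)})
print("c1" in store.active)  # True: the campaign is live in memory
```

The point of the design is that runtime requests never touch the database for campaign data: they read from the in-memory map.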
Now, do you remember the issue with the size of our catalog?
Following the same approach as for campaigns, we no longer need to send catalogs to the AdServer, which removes whole classes of issues such as data breaches, data consistency, and backpressure.
We store active items, with positive stock and an active campaign, at our in-memory level, updating the references any time a product is changed by our colleagues, following an event-driven design.
When our service needs to reply to an ad request for a specific store, it already knows which items there have an active promotion, so it asks the AdServer to choose only among the campaigns available in that store.
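The filtering step above can be sketched like this (again an illustrative Python snippet with invented campaign, item, and store names): before calling the AdServer, the candidate campaigns are narrowed down to items in stock in the requested store.

```python
# Hypothetical data: campaign -> promoted item, and per-store stock levels.
active_campaigns = {"camp_nutella": "nutella_750g", "camp_coffee": "coffee_250g"}
stock = {
    "store_milan": {"nutella_750g": 12, "coffee_250g": 0},
    "store_rome":  {"coffee_250g": 8},
}

def eligible_campaigns(store_id: str) -> list:
    """Campaigns whose promoted item is available (stock > 0) in this store."""
    store_stock = stock.get(store_id, {})
    return [camp for camp, item in active_campaigns.items()
            if store_stock.get(item, 0) > 0]

print(eligible_campaigns("store_milan"))  # ['camp_nutella']
print(eligible_campaigns("store_rome"))   # ['camp_coffee']
```

Only this short candidate list is sent to the AdServer, so it never needs to know the full catalog.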
OK, the setup is ready: let’s start serving ads to our customer base.
We cannot chain API requests, otherwise the time needed to reply to a single request is the sum of all the downstream calls.
To solve this, a temporary queue is created for each request and, following the RabbitMQ Remote Procedure Call (RPC) pattern, the flow is split into distinct chunks that run in parallel and are assembled as a last step.
This way, the time for the whole process is the time of the slowest chunk.
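Here is a toy Python illustration of why the split helps (the real system uses RabbitMQ RPC with a temporary reply queue per request; threads here merely stand in for independent consumers, and the chunk names and timings are invented): run the chunks in parallel and the total latency approaches the slowest chunk, not the sum.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def chunk(name: str, seconds: float) -> str:
    time.sleep(seconds)  # pretend this is an independent downstream call
    return name

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # Two "chunks" of the flow, resolved concurrently and assembled at the end.
    results = list(pool.map(chunk, ["catalog", "adserver"], [0.2, 0.3]))
elapsed = time.perf_counter() - start

print(results)            # ['catalog', 'adserver']
print(f"{elapsed:.1f}s")  # roughly the slowest chunk (~0.3s), not the sum (0.5s)
```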
Of course, all the solutions need tuning.
We quickly discovered a pretty challenging issue: saturation of the queue.
With an entirely asynchronous flow, it is easy to exhaust your consumers’ computational power if every message is treated as equally important.
A telemetry message, for example, should not be consumed before an ad request.
The quick and effective fix, following our previous post, was priority queues.
Now we consume high-priority messages first; once the spike is over, we can take proper time with the remaining ones.
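The effect can be demonstrated with Python’s standard-library `queue.PriorityQueue` (the production setup uses RabbitMQ priority queues, declared with the `x-max-priority` queue argument; the message payloads below are invented). With a lower number meaning higher priority, ad requests jump ahead of telemetry regardless of arrival order:

```python
import queue

AD_REQUEST, TELEMETRY = 0, 1  # lower number = consumed first

q = queue.PriorityQueue()
q.put((TELEMETRY, "telemetry: page view"))
q.put((AD_REQUEST, "ad request: store_milan"))
q.put((TELEMETRY, "telemetry: click"))
q.put((AD_REQUEST, "ad request: store_rome"))

consumed = []
while not q.empty():
    _, message = q.get()
    consumed.append(message)

print(consumed)  # both ad requests come out before any telemetry message
```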
We have been able to design, build and deliver all of this in just a quarter! 💪
We are migrating all the previous time-based touchpoints, which as usual will take time, but we believe the direction is clear and straightforward.
We keep iterating, and we keep improving.
If you want to tackle the evolving challenges that we face, check out the current openings.