
European B2B Data Enrichment in 2026: 15,000 Real Enrichments, 22 Providers, 12 Countries

April 29, 2026

What we learned running 15,000 contact enrichments through the European data-provider ecosystem for 28 real B2B companies. Published independently by Profitbl.

15,000 enrichments · 90 campaigns · 28 companies · 12 countries · 22 providers

TL;DR

- France is the easiest European country to enrich (70 to 95% mobile fill, 93 to 99% mobile quality); DACH is the hardest.
- Persona matters more than geography: functional leaders enrich at 90%+ mobile quality almost everywhere, while MD/CEO lists at sub-200-employee companies return 20 to 40% landlines.
- Most "provider comparisons" measure waterfall position, not provider quality; win shares from sequential waterfalls are structurally biased.
- Our only position-unbiased measurement: Kaspr standalone returned 43% clean mobile fill at €0.69 per clean mobile on a 274-contact multi-country industrial C-level list.
- Gap-filling the residual through Clay cost $1.84 per clean phone; Profitbl cancelled its $495/month Clay subscription during the study.
- The Nordics invert the usual channel order: phone fill (70 to 90%+) beats email fill (31 to 57%).
- Accuracy (bounce, connect, wrong-number rates) is not yet measured; this is an interim fill-rate report, with the accuracy layer arriving from Q3 2026.

Chapter 1, Why this study exists

You run a 20M EUR ARR B2B SaaS company in Europe. You ask your team which data tool you should use for European outbound. You get five different answers, three vendor pitches, and a G2 page full of reviews from companies that look nothing like yours. Nobody can show you the data.

There are more than 200 B2B data providers selling into the European market (vendor directories like G2 and Capterra hold partial lists, none independently audited). There is no independent comparison of them at scale, by real usage, with transparent methodology. G2 has reviews. Vendor blogs have vendor claims. Comparison sites are affiliate-funded. Nobody publishes the data you need to decide which tool to buy.

We decided to change that. Not out of altruism; we would benefit from knowing the answer ourselves. We run an outsourced B2B prospecting agency. We enrich data for 28 businesses across six European markets. We process roughly five thousand European B2B contacts per quarter through various tools. The measurements exist as a byproduct of our operational work. This report is what happens when we pause and look at what that byproduct says.

A few things make this study different from the provider comparisons you've probably already read:

- It is built on usage, not reviews: 15,000 real enrichments run for paying clients, logged as a byproduct of operational work.
- It is independent: no affiliate links, and no provider paid for placement, coverage, or framing.
- The methodology is transparent about its own flaws: waterfall-biased data is labelled as such and kept separate from standalone runs.
- Providers are named in full; clients are anonymized.
- It is explicitly interim: fill rate is measured now, and the accuracy layer (bounce, connect, wrong-number) is being instrumented for v2.

We should flag one thing before you read further: while building this dataset, we discovered that a large fraction of what the industry presents as "provider comparisons" is actually just measuring waterfall position, not provider quality. We'll show you why that's true below. It's the single most important concept for understanding our findings, and for reading any other provider benchmark with fresh eyes.

Chapter 2, The methodology

What we measured

For every contact processed through our enrichment stack, we captured: email fill rate, phone fill rate, phone quality (mobile, landline, malformed, or foreign), country-code accuracy, provider attribution (which provider returned the winning record), and cohort metadata (country, company size, industry, seniority tier, list type). All of it is logged into a structured Notion database we call the Enrichment Tracker. The patterns you see in this report come from that database.
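The capture schema described above can be sketched as a typed record. This is a structural illustration only; the field names are our invention, not the actual Notion schema of the Enrichment Tracker.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EnrichmentRecord:
    """One row of the tracker, as a structural sketch (field names illustrative)."""
    contact_id: str
    country: str                      # cohort metadata: the contact's market
    company_size: str                 # e.g. "11-50"
    industry: str
    seniority_tier: str               # e.g. "C-level", "Director", "Ops"
    list_type: str
    email: Optional[str]              # None = email fill failed
    phone: Optional[str]              # None = phone fill failed
    phone_class: Optional[str]        # "mobile" | "landline" | "malformed" | "foreign"
    winning_provider: Optional[str]   # provider attribution for the final record

def email_fill_rate(rows: list[EnrichmentRecord]) -> float:
    """Share of contacts for which any email was returned."""
    return sum(r.email is not None for r in rows) / len(rows)
```

Every aggregate in this report (fill rate, mobile quality, win share) is a group-by over rows shaped like this.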

What we have not yet measured

This is the single most important thing to understand about the 2026 Interim Findings.

- Email bounce rate: whether a found address actually accepts mail.
- Phone connect rate: whether a found number actually rings through.
- Wrong-number rate: whether the person who answers is the person on the list.
- Per-provider attribution of all three, so that bounce, connect, and wrong-number outcomes trace back to the provider that supplied each contact.

These are the metrics that define true accuracy, as distinct from fill rate. A provider that returns a number that bounces or reaches a disconnected line is useless, regardless of how high its fill rate looks on paper.

We deliberately did not try to measure reply rate or meetings booked per provider. Those outcomes depend on messaging and targeting, not on data. A great list with poor messaging produces no meetings; a weak list with great messaging produces some. Data quality is upstream of those outcomes; separating the two cleanly is the only way to talk about either honestly.

This is exactly why we call this report Interim Findings rather than The European B2B Data Benchmark. The accuracy layer is being instrumented right now via Instantly (for email bounce data) and CloudTalk (for phone connect and disposition data). As new client campaigns enrich, send, and dial from 2026 onward, their bounce, connect, and wrong-number rates will be attributed back to the provider that supplied each contact. The first quarterly accuracy update will be published in Q3 2026. A full v2 report combining fill rate with accuracy follows by Q1 2027.

The waterfall position bias

Most of our data comes from Clay waterfalls, where multiple providers are run in sequence: provider 1 sees every contact; provider 2 only sees contacts that provider 1 failed to find; provider 3 only sees the residual after 1 and 2; and so on.

This means "win share", the percentage of records a provider contributed, is primarily a function of waterfall position, not provider quality.

An example makes it concrete. Imagine a waterfall with three providers, A, B, C, running on 1,000 contacts. Provider A goes first, finds 700. Provider B runs on the 300 remaining, finds 150. Provider C runs on the 150 remaining, finds 50.

Win share by position:

- Provider A: 700 of 900 found records, a 77.8% win share
- Provider B: 150 of 900, 16.7%
- Provider C: 50 of 900, 5.6%

But if we ran the same 1,000 contacts through each provider in isolation, we might discover (numbers illustrative):

- Provider A: 70% standalone hit rate, the same 700 finds, since it already saw the full list
- Provider B: 80% standalone hit rate, because in the waterfall it only ever saw the 300 contacts A had already missed
- Provider C: 60% standalone hit rate

Provider B looked worst by win share, but is actually the best provider overall. Position ate its visibility, not quality.
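The position effect is mechanical and easy to simulate. A minimal sketch, with hit rates chosen to reproduce the worked example above (they are illustrative, not measured figures):

```python
def run_waterfall(n_contacts: int, providers: list[tuple[str, float]]) -> dict[str, int]:
    """Sequential waterfall: each provider only sees the residual its predecessors missed."""
    wins: dict[str, int] = {}
    remaining = n_contacts
    for name, hit_rate in providers:
        found = round(remaining * hit_rate)
        wins[name] = found
        remaining -= found
    return wins

# A sees 1,000 and finds 700; B sees the 300 left and finds 150;
# C sees the 150 left and finds 50 — exactly the worked example.
wins = run_waterfall(1000, [("A", 0.70), ("B", 0.50), ("C", 1 / 3)])
# Win share makes A look ~14x better than C, yet it says nothing about
# what B or C would have found on the full list. Only same-list isolation
# runs rank providers fairly.
```

Note that B's 50% hit rate here applies to A's leftovers, which is a structurally harder population than the full list; its standalone rate on the full 1,000 could be anything.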

In our clients' waterfall configurations, providers were ordered by cost per credit, cheapest first, most expensive last. This was a rational business decision. You want your cheapest provider to catch the easy finds before you spend expensive credits. But it systematically suppresses the apparent performance of the expensive providers placed at the end. A provider like Datagma, late in most of our waterfalls, might be excellent on full-list data, and we'd have no way to see that from Clay data alone.

Implication for every "who wins" claim you might read anywhere: unless the comparison was controlled, same list, each provider in isolation, the comparison is measuring waterfall position rather than provider skill. Our data included.

Two datasets, two types of claims

To separate what we can defensibly say from what we can't, we classified every entry in our Tracker into two primary buckets, plus a provisional third.

Dataset A, waterfall-biased. Approximately 60 of our 90 campaigns. Provider win shares are distorted by position. Usable for quality signals (mobile vs landline rate, country accuracy, foreign contamination, malformed rate) and cohort reachability (what % of a persona ended up reachable, regardless of source). Not usable for absolute hit-rate rankings between providers.

Dataset B, standalone. Runs where a single provider processed the full list in isolation. In this report, Dataset B is small but pivotal: Kaspr ran alone on a 274-contact multi-country European industrial list, and Eficy ran alone on a separate engagement (pending publication in v2 once the data is audited). Usable for real hit-rate claims, cost-per-clean-record, and full cohort-level quality.

Dataset C, provisional "pre-enriched". About 30 campaigns where contacts arrived with email/phone already filled by the client or a prior process. We're re-checking these for v2; many likely belong to Clay or Kaspr passes that weren't fully attributed at logging time. Used cautiously in v1 for cohort reachability patterns only.

Anonymization

We serve 28 businesses whose data contribute to this study. None of them is named. Every cohort is described by its structural properties, "a 274-contact multi-country European industrial C-level list", "a 519-contact UK enterprise in-house legal counsel list", rather than by the client that commissioned the work. This preserves client confidentiality and makes the findings more honest about what's generalizable versus what's specific to one company's list. This anonymization standard tracks the GDPR.eu guidance on processing B2B contact data: descriptive properties of cohorts can be published; identifiable client lists cannot.

Providers, on the other hand, are named in full. There are no anonymized "Provider X" placeholders. We believe readers deserve specifics, and we are independent enough of every provider in the study to give specifics.

Known blind spots in v1

- No accuracy data yet: bounce, connect, and wrong-number rates are being instrumented via Instantly and CloudTalk, with the first update due Q3 2026.
- Provider win shares are waterfall-position-biased everywhere except the Kaspr standalone anchor.
- Dataset C ("pre-enriched") attribution is provisional, and the magnitude of US/Canadian-code contamination on UK lists still needs rigorous verification; both are flagged for v2.

If one thing survives your reading of this report, let it be this: a fill-rate benchmark measures fill rate. An accuracy benchmark measures accuracy. A waterfall win share reports who finished first inside one specific waterfall. Our numbers, like most numbers you've read on this topic, fit the first category. The accuracy layer is coming.

Chapter 3, Geography: data quality varies enormously across Europe

How deep the waterfalls run

Before the country-by-country findings, a credibility note. Most of our client waterfalls run 4 to 7 providers in sequence per channel, phone and email handled separately. Our largest configurations stack 8 providers on the phone side alone, including Kaspr, Wiza, Forager, ContactOut, Datagma, LeadMagic, Findymail, and Clay's first-party finders. The point: every contact in our dataset has been attempted by multiple sources before being marked unreachable. When we report low fill or low quality in a country, that reflects real provider-coverage limits in that country, not enrichment shortcuts on our end.

If you build your European outbound strategy assuming every country enriches the way France does, you will lose a lot of money and time. Geography effects in our data are large, consistent, and shaped as much by LinkedIn penetration and local data-scraping culture as by the providers you choose.

3.1 France, the easiest European country to enrich

Across our French campaigns, mobile fill rate ranged from 70% to 95%, and mobile quality was consistently 93 to 99%. Foreign contamination on France-only lists was typically under 3%. In several campaigns, we observed a 100% FR-mobile rate, no landlines, no foreign, no malformed numbers, something we did not see clearly in any other country.

Three reasons we think France enriches this well. First, French B2B executives are heavily active on LinkedIn. Second, mobile number sharing is culturally normal on French professional profiles. Third, France has strong native enrichment tools (Kaspr, especially) competing with US-centric providers for the same contacts, which raises the quality floor.

As a shorthand benchmark: on a Paris-metro IT/tech list of 494 C-level and Director contacts, we saw 90% email fill, 77% phone fill, and 99.5% mobile quality, only one landline and a handful of foreign numbers across nearly 400 phones. That is the European data-quality ceiling.

3.2 UK, high email fill, fragmented phone results by persona

UK email fill rates are consistently excellent: 85 to 100% across our campaigns. Mobile quality is similarly strong at the mid-market, 90 to 95% on operations and director-level cohorts. The story changes sharply at the top of the org chart in small and mid-sized enterprises.

On lists of Managing Directors and CEOs at UK companies below 200 employees, we saw landline rates of 8 to 20% on mixed waterfalls and up to 36% from specific providers whose finds we can isolate. The data source drives this, not the provider; senior executives at smaller UK companies are often reachable only through the company switchboard, and directory-scraped tools surface those numbers as "found" phone data.

Country-code validation is mandatory on UK lists before any dial session. We observed elevated US/Canadian-code contamination on UK sales-leadership cohorts, but want to verify the magnitude rigorously by cross-checking each contact's location against the returned country code before publishing a rate. Flagged for v2.

3.3 Benelux, Netherlands clean, Belgium landline-heavy, Luxembourg multi-jurisdictional

The three Benelux markets look nothing alike in our data.

The Netherlands behaved most similarly to France. On NL-only retail and enterprise lists, mobile quality was 90 to 95%, reachability routinely exceeded 94%, and foreign contamination was near zero. In one Clay waterfall on a 115-contact Dutch enterprise list, every provider we saw, Wiza, Forager, Datagma, ContactOut, LeadMagic, returned 100% mobile quality. No landlines across 79 phones.

Belgium performed noticeably worse. Mobile quality ranged 65 to 92% depending on the persona cut. Senior cohorts (founders, CEOs) returned landline rates of 10 to 22%. We suspect Belgian B2B contacts are more often surfaced from directory-scraping sources than from LinkedIn-based providers, which pushes more switchboard numbers into the "found phone" column.

Luxembourg behaved most variably: senior-cohort mobile reachability ranged 50 to 95% depending on which upstream sources fed the stack. The upstream-source choice mattered more than the country itself. International law firms with Luxembourg offices showed a separate, persistent pattern: 40% US contamination on returned mobile numbers, driven by dual-jurisdiction partners whose LinkedIn profiles sit in Luxembourg but whose mobile numbers remain US-based. For cybersecurity cohorts specifically, where regulatory context drives a lot of the targeting (NIS2, ENISA guidance), the contamination problem compounds: the right person sits in a Luxembourg subsidiary, but the number returned belongs to a New York parent.

3.4 DACH, the hardest European enrichment territory

On generic DACH cuts, "Swiss software companies, 11 to 200 employees" or "German-speaking region, all industries", mobile quality dropped to 65 to 70%. Landline rates hit 22% on Swiss 11 to 50-employee lists. DACH was consistently our hardest territory: fragmented data sources, less standardized LinkedIn behaviour than France or the Nordics, and a strong Swiss directory-scraping tradition that pushes switchboard numbers into the "found phone" column.

DACH is inconsistently hard rather than universally hard. On a 191-contact list of German marketing leaders at furniture-industry companies, we saw 99% DE mobile quality. Zero landlines, only a handful of foreign numbers. Persona and industry tightness drove the result. Country played a smaller role. Broad "find me all DACH SaaS" cuts enrich badly. Narrow persona × industry cuts enrich well.

For DACH outbound, this implies a different waterfall strategy: spend more time on the input list than on the tool. A good input list enriches well even from a mediocre tool stack; a bad input list enriches badly even from a premium tool stack.

3.5 Nordics, phone fills better than email

The Nordics inverted what we expected. Email fill rates in our Nordic campaigns ranged from 31 to 57%, mediocre to poor. But phone fill, properly configured, ran 70 to 90%+, with 89% Nordic mobile quality on one 135-contact phone-heavy pass. On the same list, email fill was 31%.

Our read: Swedish and Norwegian first/last names carry enough Latin-character variance (diacritics, patronymic structures, dual-surname handling) that email-pattern-guessing algorithms underperform. But Nordic executives' mobile numbers, once found, are almost always clean, no landline tax, low foreign contamination. For Nordic outbound, we'd flip the channel: phone first, email second. Most buyers assume the opposite.

What the geography signal means for you

A single European outbound strategy, one provider stack, one channel mix, one reach-rate assumption, will underperform badly if your ICP spans France and DACH together. The mobile reachability gap between "mid-market French SaaS C-levels" (95%+) and "Swiss generic SMB founders" (under 50% net of landline cleanup) is large enough that treating them the same is a strategic mistake.

If you run pan-European outbound, budget differently per country. If you run a targeted country play, pick France first, UK second, Benelux third; save DACH and Nordics for when you have the maturity to treat them as their own problems.

Country and persona benchmarks in the cohort series

This study has eight country- and persona-specific benchmarks. Each one drills into a single market or cohort with provider-level numbers; the full set is laid out in the cohort matrix in Chapter 6.

Chapter 4, Persona: who you target matters more than where they are

We started this study expecting geography to dominate data-quality variance. Persona dominated instead. The gap in reachability and phone quality between "operations managers in France" and "Managing Directors in France" is larger than the gap between "operations managers in France" and "operations managers in the Netherlands."

4.1 The C-level landline issue

On lists of Managing Directors at UK SMBs (companies under 200 employees), ContactOut alone returned 60% mobile / 36% landline. On a list of 335 French CISOs with merged directory sources, we observed a 45% landline rate, the highest we saw anywhere in the study. On a 200-contact list of Swiss cybersecurity directors merged from Kaspr + directory scraping, the landline rate was 31%.

The pattern has a clear explanation. Senior executives at small and mid-sized enterprises are often listed on company switchboards, not personal mobiles. Directory-scraping tools, LinkedIn "contact info" field captures, and any enrichment source that crawls company websites will surface those numbers as "found" phone data. The enrichment tool is doing what you asked for. It is finding a number. You wanted a mobile that reaches the person directly. Those are different jobs.

If your ICP is MD/CEO at companies under 200 people, assume 20 to 40% of returned "phones" will be landlines and build a pre-dial cleaning step into your process. Don't dial blind.
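A pre-dial cleaning step can start as a simple prefix filter. The sketch below covers only UK and FR numbering prefixes and is a heuristic, not a full numbering-plan lookup; a production pipeline would lean on something like Google's libphonenumber instead of hand-rolled rules.

```python
import re

# Mobile prefixes by expected country (simplified: UK mobiles are +447,
# French mobiles are +336/+337). This is a heuristic for illustration only.
MOBILE_PREFIXES = {"GB": ("+447",), "FR": ("+336", "+337")}
COUNTRY_CODES = {"GB": "+44", "FR": "+33"}

def classify(number: str, expected_country: str) -> str:
    """Bucket a returned 'phone' before it ever reaches a dial session."""
    digits = re.sub(r"[^\d+]", "", number)          # strip spaces, dots, dashes
    if not digits.startswith("+") or len(digits) < 11:
        return "malformed"
    prefixes = MOBILE_PREFIXES.get(expected_country)
    if prefixes is None:
        return "unknown-country"
    if not digits.startswith(COUNTRY_CODES[expected_country]):
        return "foreign"                             # e.g. a US number on a UK contact
    if digits.startswith(prefixes):
        return "mobile"
    return "landline"                                # likely a switchboard number
```

Running every record through a classifier like this before dialling is what turns a 36%-landline ContactOut sample into a usable dial list.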

4.2 Operations leaders, directors, and practitioners enrich cleanly

Operations directors, sales directors, technical leaders, and the functional leadership layer just below C-suite consistently returned 90%+ mobile quality across our data, regardless of geography. These personas are LinkedIn-active, mobile-native, and not hidden behind gatekeepers the way CEOs at small companies are.

On a 458-contact list of Managing Directors at UK manufacturers, Wiza returned 96% mobile quality. On a 166-contact list of Operations Leaders at UK manufacturing SMBs, Wiza returned 100% mobile quality. Same provider, same country, same industry, different seniority.

Chapter 5, Providers: what we can and can't say about each one

Read chapter 2 first if you haven't. Win share in our data reflects waterfall position. Provider skill is something else entirely. What follows is quality-signal observations, position-caveated usage patterns, and, for Kaspr specifically, the one real standalone hit-rate measurement in this report.

5.1 Kaspr

Kaspr is the only provider in this report about which we can make clean hit-rate claims. It was used as a single-source first pass on a 274-contact multi-country European industrial C-level list before any waterfall fill-in. The numbers are therefore position-independent. Cohort context for this anchor sits alongside our broader client results at profitbl.com/case-study.

- 43% clean mobile fill on the full 274-contact list, position-independent.
- 85% country accuracy pre-cleanup, with a 4% wrong-country rate even on European input.
- 16 European countries covered in a single pass.
- Effective cost of €0.69 per clean mobile at public pricing.

5.2 Clay as waterfall orchestrator

Clay is an orchestration layer. It calls into other providers' APIs and merges their outputs. The data sources are the providers underneath.

What we can say about Clay specifically:

- At the $495/month Growth tier, credits work out to $0.0825 each; on our gap-fill residuals, that translated to $1.84 per clean phone.
- Output quality is the quality of the providers underneath; Clay itself contributes orchestration, not data.

Clay's value proposition is aggregation and orchestration. Accuracy is delivered by the providers underneath. Whether that aggregation layer is worth $495/month at its Growth tier depends on whether you have the volume and provider-diversity needs to justify it. Profitbl cancelled its Clay subscription on 2026-04-22 during this study. We're transparent about that conflict of interest: we were Clay customers, we're no longer Clay customers, and the data in this report shaped the decision.

5.3 Wiza

Wiza was placed first in many of our clients' phone waterfalls, which inflates its visible win share. That said, the quality where it's visible is consistently strong.

- 96% mobile quality on a 458-contact UK manufacturing MD list; 100% on a 166-contact UK manufacturing Ops Leader list.
- 50 to 70% win share in waterfalls where it held the first phone position.
- Quality held in the French and Dutch samples where Wiza finds were attributable, at 99 to 100% mobile quality.

What we cannot conclude from our data: whether Wiza's 50 to 70% win share reflects provider strength or waterfall position. That requires controlled head-to-head testing, which is v2 work.

5.4 ContactOut

ContactOut shows the clearest provider-specific quality pattern in the whole study. You can act on it immediately.

- On UK MD/CEO lists at sub-200-employee companies: 60% mobile / 36% landline, the highest provider-isolated landline rate in our data.
- On Luxembourg finance-cyber cohorts: 56% mobile quality, its worst sample anywhere in the study.

The mechanism is described in ContactOut's own documentation: it scrapes LinkedIn profile data, and the behaviour is consistent, not random. If your ICP is operations or mid-level functional leadership, ContactOut's mobile-accuracy rate holds up. If your ICP is CEO/MD at SMBs, either exclude ContactOut from your waterfall or place it last with a strict mobile-format filter on its outputs.

5.5 LeadMagic

When Icypeas was absent from the email waterfall, LeadMagic typically captured 60 to 85% of the attributed email wins. On lists where it took the first email position, UK SaaS, French IT, Luxembourg Head of Sales, it accounted for up to 85% of found emails.

Position bias caveat: LeadMagic looks like a very strong email provider in our data, but we haven't isolated it against alternatives. The story "LeadMagic is great" and the story "LeadMagic was placed first in most of our email waterfalls" both explain the data equally well. v2 controlled testing will separate them.

5.6 Forager

Across the UK, France, the Netherlands, and Germany, Forager returned 100% mobile quality in nearly every sample we observed. No landlines, no foreign numbers, no malformed entries, just clean mobiles, every time it hit.

In Nordic and CEE samples, Forager quality dropped to 85%, with occasional exotic-code misfires. In lists where Forager was placed first in the phone waterfall (several Profitbl client configurations), it captured 47 to 65% of phone wins.

If we were building a phone waterfall from scratch today based on Dataset A signals alone (with all the position-bias caveats), Forager would be our first-line recommendation for UK/FR/NL/DE markets. But we're making that recommendation with the understanding that v2 testing could change it.

5.7 Shorter notes on other providers

Datagma: 100% mobile quality in the samples where we saw it. Placed late in most waterfalls (more expensive per credit), so the win share is modest. On a 519-contact UK legal counsel list, its 52 wins held 85% mobile quality, the one sample where Datagma quality wavered, on a cohort with elevated directory-data exposure.

Findymail: reliably second or third in email waterfalls that include LeadMagic. Win shares 15 to 30% where present.

Enrow: appeared in newer Profitbl configurations and in Hyper Growth Show campaigns. Win shares 4 to 25%, depending on the waterfall position. Too little standalone data to say more.

Icypeas: placed first in many email waterfalls. Raw column fills 100% of rows, but the winning final Work Email rarely matched Icypeas output cleanly. The cause: an email verifier ran downstream of Icypeas in our flow and sometimes overturned its results, triggering fallback to another provider. Icypeas's true performance in our data is obscured; v2 will reattribute correctly.

Hunter: low win rate in most waterfalls (placed late, expensive per credit). Where it did win, on UK Retail marketing directors specifically, it contributed 13 to 16% of emails. Persona-specific strength, possibly.

Dropcontact: consistently low win rate. Placed near the bottom in most waterfalls.

Kitt: appeared in several Neoday waterfalls. Small absolute win share (1 to 5%) but 100% mobile quality when it did win.

Prospeo, Zeliq, People Data Labs, upcell, SMARTe, Surfe, RocketReach, Nimbler, Snov.io: all appeared in our data with small enough samples that we'd mislead readers by drawing conclusions. Each is on the v2 research list.

Running caveat for this chapter: everything above is waterfall-biased except the Kaspr anchor. Treat provider rankings as approximate; treat quality observations (landline rates, country accuracy, foreign contamination) as substantive.

Chapter 6, The cohort matrix: which providers to trust on which cohort

Provider behaviour shifts cohort to cohort more than most buyers expect. The same provider can deliver 100% mobile quality on a UK Manufacturing Ops Leader list and 56% on a Luxembourg compliance cohort. What follows is a compact lookup table, eight cohort cells where we have enough data (N ≥ 50, at least one specific provider signal) to make defensible recommendations. Each cell links to a deeper listicle that walks through the full evidence.

These are quality-signal recommendations, which providers to trust in this cohort. We do not publish absolute hit-rate rankings. Win shares remain waterfall-position-biased (see Chapter 2). The recommendations below are what the data lets us say honestly.

6.1 UK manufacturing, senior + ops leaders

N: 624 across two campaigns (458 MDs + 166 Ops Leaders). Finding: Wiza returned 96% mobile quality on UK manufacturing MDs and 100% on UK manufacturing Ops Leaders, the cleanest single-provider-cohort signal in the entire dataset. Recommendation: Wiza first on phone, LeadMagic or Icypeas on email. Don't over-engineer the waterfall on this cohort. Full breakdown: profitbl.com/blog/best-data-providers-uk-manufacturing

6.2 France IT/SaaS, C-level + Director

N: ~1,500 across multiple campaigns. Finding: France is structurally clean. Wiza, Forager, Datagma, and LeadMagic all return 99 to 100% mobile quality when they win. The Forager-first phone waterfall on a 494-contact French IT list returned 74% of phone wins at 99.5% mobile quality. Recommendation: Any reputable provider works. Cost optimization > provider choice. The discipline that matters is not merging directory-scraped sources into your stack; when that happens, landline rate jumps from <2% to 45%. Full breakdown: profitbl.com/blog/best-data-providers-france-saas

6.3 Luxembourg, compliance + law firms + cybersecurity

N: ~350 across four campaigns. Finding: Luxembourg is the most contaminated geography we measured. International law firms with Luxembourg offices show 40% US country-code contamination on returned mobiles (dual-jurisdiction partners). On Luxembourg finance-cyber cohorts, ContactOut returned 56% mobile quality, its worst sample anywhere in the study. Recommendation: exclude or strictly filter ContactOut in Luxembourg. Country-code validation is mandatory. Full breakdown: profitbl.com/blog/luxembourg-data-enrichment-pitfalls

6.4 DACH, tight cuts work, broad cuts don't

N: ~2,700 across DACH and contrast cohorts. Finding: a 1,382-contact "DACH 10 to 200 employees" generic cut returned 65 to 70% mobile quality with 22% landline contamination. A 191-contact "Germany Marketing + CEO at Furniture companies" tight cut returned 99% DE mobile quality. Same provider stack, different input lists. Recommendation: Spend more time on the input list than on the tool stack. Tighten industry × persona × size before you enrich; DACH rewards specificity. Full breakdown: profitbl.com/blog/dach-enrichment-tight-cuts-vs-broad

6.5 Nordics, phone first, email second

N: ~320 across three Nordic campaigns. Finding: email fill rates ran 31 to 57% (mediocre), but phone fill ran 70 to 90%+ at 89% Nordic mobile quality on the same lists. Most buyers assume the opposite. Recommendation: Invert the channel order on Nordic outbound. Forager + Datagma + Wiza on phone first; email as a secondary touch. Full breakdown: profitbl.com/blog/nordics-data-enrichment-phone-first

6.6 UK Retail Marketing, the Hunter exception

N: 580 across five Neoday campaigns. Finding: Hunter has a low win rate in most of our waterfalls, except on UK Retail marketing director cohorts, where it consistently wins 13 to 16% of email finds. We don't have a clear explanation for the persona specificity, but the pattern is repeatable across five campaigns. Forager-first phone, ~47% wins at 100% mobile quality. Recommendation: include Hunter in UK Retail marketing email waterfalls; deprioritize elsewhere. Full breakdown: profitbl.com/blog/best-data-providers-uk-retail-marketing

6.7 Belgium HR + Talent, persona variance dominates

N: ~1,400 across the Amélio (1,077) and Ataya Belgium (354) campaigns. Finding: mobile quality ranged from 65 to 92%, depending on persona cut, not provider. Senior cohorts (founders, CEOs) returned 10 to 22% landline rates. The same waterfall on Belgian HR managers vs Belgian founders produced quality gaps wider than the gaps between providers. Recommendation: Provider choice matters less than persona discipline in Belgium. Pre-dial mobile-format filter is mandatory on senior cohorts. Full breakdown: profitbl.com/blog/best-data-providers-belgium-hr

6.8 Multi-country European industrial C-level, the Kaspr anchor

N: 274 (Vango Solutions Enterprise TIER1, Kaspr standalone). Finding: Kaspr standalone returned 43% clean mobile fill, 85% pre-cleanup country accuracy, 16 European countries covered. Effective cost is €0.69 per clean mobile at public pricing. This is the only Dataset B (standalone, position-unbiased) hit-rate measurement in v1. Recommendation: Kaspr is the defensible single-source choice for multi-country European industrial outbound. Country-code validation mandatory (4% wrong-country rate even on European input). Full breakdown: profitbl.com/blog/kaspr-european-industrial-benchmark

6.9 What's missing from the matrix

We deliberately did not include cells where N falls below 50 or where provider attribution was unclear. Spain, Italy, Portugal, Poland and the rest of CEE all need more data before we can make cohort-specific provider recommendations; that's v2 work. Healthcare, government, and defence appeared as contact industries on our lists but never as cohort majorities; cohort-specific runs are flagged for v2.

We also did not include absolute hit-rate rankings within any cell. "Wiza wins more than Forager on UK Manufacturing" is not a defensible claim from waterfall-biased data, even when one provider visibly wins more often. Standalone runs per cohort are the only way to settle that question, and they're scheduled for v2.

If you want to apply this matrix to your own target list, the free Data Provider Selector (profitbl.com/tools/data-provider-selector) walks you through your country, persona, and industry mix and returns a recommended waterfall configuration based on the patterns in this study.

Chapter 7, The cost economics of European enrichment

7.1 Flat-rate vs pay-per-credit

The pricing models of European enrichment providers split into two camps: flat-rate subscriptions (Kaspr, Enrow, Icypeas, €29 to 59/month for 2,000 to 4,000 finds) and pay-per-credit orchestration (Clay's $495/month Growth tier for 6,000 credits, effectively $0.0825 per credit).

At any real European volume, dedicated single-source tools at the front of a waterfall come in cheaper per clean record than credit-based orchestration on the residual. Kaspr at €0.69 per clean mobile is roughly 2.5x cheaper than Clay's $1.84 per clean phone on gap-fill, even before accounting for the residual being structurally harder than the full list. Icypeas at flat-rate pricing for email is similarly cheaper than letting Clay spend credits late in a waterfall.

7.2 The gap-fill tax

After Kaspr's 274-contact first pass, the remaining 156 contacts cost $156.75 to attempt via Clay's waterfall. About 85 clean phones came out, 54.5% gap-fill rate, at $1.84 per clean phone.

Said differently: you pay the most per record for the contacts that everyone else also fails to find. The cheap providers skim the high-LinkedIn-exposure top of the market. The expensive providers get the hard residual. And the residual doesn't enrich at the same rate as the top; it enriches at roughly half the rate, at roughly 2.5x the cost per clean record.
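The arithmetic in this section reduces to a few lines. The inputs below are the figures reported above, and the ratio ignores EUR/USD conversion the same way the chapter's rough comparison does:

```python
# Gap-fill economics from the 274-contact Kaspr-first run.
gap_fill_spend = 156.75       # USD in Clay credits spent on the residual
residual_contacts = 156       # contacts left after the Kaspr first pass
clean_phones = 85             # clean mobiles recovered from that residual

gap_fill_rate = clean_phones / residual_contacts       # ~54.5%
cost_per_clean_phone = gap_fill_spend / clean_phones   # ~$1.84

kaspr_cost_per_clean_mobile = 0.69                     # EUR, standalone anchor
ratio = cost_per_clean_phone / kaspr_cost_per_clean_mobile  # ~2.7x, ignoring FX

print(f"{gap_fill_rate:.1%} gap-fill, ${cost_per_clean_phone:.2f}/clean phone, {ratio:.1f}x")
```

The exact ratio lands near 2.7x before currency conversion, which is why the chapter rounds it to "roughly 2.5x" once FX is accounted for.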

Design your waterfall accordingly. Cheapest provider first (flat-rate, broad-coverage). Expensive providers sized for the residual, not the full list. And at some point, the math stops working; a residual contact that's already been missed by three providers is probably not worth spending $3 to 5 in credits to find.

7.3 Why Profitbl cancelled Clay

On 2026-04-22, during the compilation of this study, Profitbl cancelled its $495/month Clay Growth subscription. The replacement stack is Kaspr + Eficy for phone (€59/month + per-number pay-as-you-go), Icypeas + Enrow for email (€35/month + $29/month). Projected annual savings: approximately €2,800.

This is a conflict of interest we want to name rather than hide. We were Clay customers. We're no longer Clay customers. The data in this report shaped that decision, specifically the gap-fill cost finding and the waterfall position bias realization. We are independent of Kaspr, Eficy, Icypeas, and Enrow as well; none have paid for placement, coverage, or favorable framing.

7.4 Cost per clean mobile by cohort

Our rough calculation at current public prices, using our measured clean-mobile rates:

These figures don't include the accuracy cost: bounced emails and wrong-number dials that show fill but produce zero pipeline. That's the v2 measurement. The true total cost per verified meeting, once we can measure it, will be meaningfully higher than these full-cost figures suggest.

Chapter 8, What you should actually do

8.1 Small European outbound teams (€0 to 5k/month enrichment budget)

Start with one phone source and one email source rather than orchestration. Kaspr covers 16 European countries on phone and is straightforward to subscribe to; Icypeas or LeadMagic cover email at flat-rate pricing. The two-vendor starting point handles a meaningful share of typical European outbound list volume without the orchestration overhead Clay introduces. Add orchestration only when you have four or more providers running in parallel; under that threshold, the orchestration value isn't worth the cost.

Validate country codes on every dial list before dialling. Expect 40 to 60% mobile reachability on mixed-country lists after cleanup. Budget twice as much dial time as you think you need; outbound is a volume game, and your first enrichment run will teach you more than any benchmark.

8.2 10k+ enrichments per month

Build a direct API waterfall. Orchestration layers add cost and latency at this volume; you can sequence the providers yourself. Order cheapest-first by credit cost, then add a post-waterfall quality filter that drops records with landline format, wrong country code, or malformed numbers before they ever reach dial. A ~15-minute engineering investment in that filter pays back immediately.

Cost-audit monthly. The metric that matters is cost per verified meeting. Cost per find is a vanity number. You won't have that metric without bounce + connect data, which means you need to instrument Instantly and your dialer against provider attribution from day one. (This is also how v2 of this study is being built.)
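
The monthly cost audit reduces to one funnel calculation. The rates below are hypothetical placeholders, not measured values from this study; the point is the shape of the metric, spend divided by meetings rather than spend divided by finds:

```python
def cost_per_verified_meeting(spend: float, found: int, clean_rate: float,
                              connect_rate: float, meeting_rate: float) -> float:
    """Walk fill -> clean -> connect -> meeting; rates are fractions."""
    meetings = found * clean_rate * connect_rate * meeting_rate
    return spend / meetings if meetings else float("inf")

# Hypothetical provider month: $800 spend, 500 found phones,
# 70% survive cleaning, 15% connect, 10% of connects book a meeting.
cpm = cost_per_verified_meeting(800, 500, 0.70, 0.15, 0.10)

# Cost per find for the same month looks flattering by comparison:
cost_per_find = 800 / 500   # $1.60, the "vanity number"
```

A provider with a worse cost per find can still win this metric if its records clean and connect at higher rates, which is exactly why the audit needs bounce and disposition data attributed per provider.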

8.3 Executives at small/mid companies

Assume 20 to 40% of your "found phones" will be landlines or directory numbers. Build a pre-dial cleaning step and treat it as mandatory. Prioritize the email channel; phone becomes secondary. Exclude ContactOut from the phone waterfall on senior personas, or place it last behind a strict mobile-format filter.

If your ICP is specifically CEOs at sub-200-person companies in Europe, accept that this is one of the harder enrichment targets in the landscape. Budget accordingly: more research time per account, smaller lists, tighter personalization.

8.4 Pan-European outbound with a small team

Don't treat Europe as one territory. At minimum, separate your strategy into: (1) France, (2) UK, (3) rest of Western Europe, (4) DACH, (5) Nordics. Each bucket has different reachability rates, different channel mix priorities, and different waterfall economics. A single "European" budget averaged across countries will under-fund the cohorts that need the most enrichment investment to crack (DACH, Nordics) and over-fund the cohorts that already enrich cheaply on simple stacks (France). Eurostat's own labour market data shows the same fragmentation: B2B knowledge-worker density, language clusters, and SME size distributions vary enough across the five buckets that one enrichment strategy cannot cover them all. Teams that run pan-European outbound without country-by-country segmentation are usually the ones reaching out to us at profitbl.com/outsourced-sdr-services-for-b2b-saas after the first budget cycle disappoints.
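
The five-bucket split can be expressed as a small routing config. The country groupings follow the buckets listed above; the first-channel defaults are illustrative, drawn loosely from the channel patterns reported elsewhere in this study, and should be overridden by your own cohort data:

```python
# Illustrative bucket config; channel defaults are assumptions, not
# measured recommendations for every ICP.
EU_BUCKETS = {
    "france":  {"first_channel": "phone", "countries": ["FR"]},
    "uk":      {"first_channel": "phone", "countries": ["GB"]},
    "western": {"first_channel": "email", "countries": ["BE", "NL", "LU", "IE"]},
    "dach":    {"first_channel": "email", "countries": ["DE", "CH", "AT"]},
    "nordics": {"first_channel": "phone", "countries": ["SE", "DK", "NO", "FI", "IS"]},
}

def bucket_for(country_code: str) -> str:
    """Route a contact's ISO country code to its strategy bucket."""
    for name, cfg in EU_BUCKETS.items():
        if country_code in cfg["countries"]:
            return name
    raise KeyError(f"no European bucket for {country_code}")
```

Budgeting per bucket rather than per "Europe" is then a matter of grouping your list through `bucket_for` before allocating enrichment spend.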

Frequently Asked Questions

Which B2B data provider is best for European outbound?

There is no single best provider for all of Europe. France, the UK, and the Netherlands enrich cleanly from most major tools. DACH and the Nordics behave very differently and reward narrow targeting more than premium tooling. For multi-country lists, Kaspr was the only provider we could measure cleanly in isolation and returned a 43% clean mobile fill rate across 16 European countries at roughly 0.69 EUR per clean mobile.

How much does B2B data enrichment cost in Europe?

On a clean French C-level cohort, expect roughly 0.70 EUR per clean mobile through a single-source provider like Kaspr. On a UK mid-market operations cohort run through a Clay waterfall, the all-in cost lands around 1.50 EUR to 2.00 EUR per clean mobile. On a Swiss generic SMB cohort with low fill rate and high landline contamination, cost can exceed 3 EUR to 5 EUR per clean mobile. None of these figures include the bounce or wrong-number tax that only shows up once you start dialling.

Is Clay worth the 495 USD per month?

Clay is an orchestration layer. It is worth the subscription when you run four or more providers in parallel and need merged outputs, deduplication, and conditional logic. Below that threshold, single-source providers at flat-rate pricing usually deliver a lower cost per clean record. Profitbl cancelled its Clay subscription in April 2026 after this study revealed the gap-fill cost economics; the replacement direct-API stack saves approximately 2,800 EUR per year.

What is the difference between fill rate and accuracy in B2B data?

Fill rate is the percentage of contacts where a provider returns any value. Accuracy is the percentage of those values that are actually correct and reachable. A provider can have a 90% fill rate and a 50% accuracy rate, which means half the numbers it returned bounce, ring a switchboard, or reach the wrong country. Most published provider rankings, including v1 of this study, measure fill rate. Accuracy requires bounce data, connect data, and disposition data. We are publishing v2 with that layer in Q3 2026.
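
The fill-versus-accuracy distinction is a two-factor multiplication, worked through here with the 90%/50% example from the answer above (the second provider's rates are hypothetical, for contrast):

```python
contacts = 1000

# The provider from the example: high fill, low accuracy.
fill_rate = 0.90       # returns a value for 900 of 1,000 contacts
accuracy = 0.50        # only half of those values are actually reachable
usable = contacts * fill_rate * accuracy        # 450 usable records

# A hypothetical 60%-fill, 90%-accuracy provider beats it on output:
usable_alt = contacts * 0.60 * 0.90             # 540 usable records
```

This is why rankings built on fill rate alone, including v1 of this study, can invert the true ordering of providers.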

Why are so many "European" mobile numbers actually US numbers?

Two reasons. First, providers scrape LinkedIn activity history and parent-company directories, which often contain US-based numbers for European executives who previously worked in the US or whose company headquarters sits there. Second, on multi-jurisdiction cohorts like Luxembourg international law firms, partners hold genuinely dual phone identities. The wrong-country contamination rate ranged from 5% to 40% in our data, depending on the cohort. Country-code validation before any dial session is mandatory.

Should I enrich phone or email first for European outbound?

For France, the UK, and the Netherlands, either channel works well. For DACH, email is usually the safer first channel because phone fill drops on broad cuts. For the Nordics, invert the default: phone fill rates run 70 to 90% with clean mobile quality, while email fill rates run only 31 to 57% because Scandinavian name structures break most email-pattern algorithms.

Why do landline rates jump so much on senior executives at small companies?

Senior executives at companies under 200 employees are often listed on the company switchboard, not personal mobiles. Directory-scraping enrichment tools surface those numbers as "found phone" data. On UK Managing Directors at sub-200-employee companies, landline rates ran 30 to 40% from specific providers. Build a pre-dial mobile-format filter into your process. Landlines almost never reach the person you want to talk to.

Are any providers paying for placement in this report?

No. The 15,000 EUR in enrichment credits and subscriptions consumed during the 16-month measurement period was absorbed into operational client work. Profitbl has no affiliate relationships with any provider named in this report. Multiple providers we currently use come out looking bad in parts of the data; we left those findings in unchanged.

Chapter 9, What we don't know, and what's next

We're optimistic about what v2 brings. The infrastructure to capture accuracy data (Instantly for email bounce, CloudTalk for phone connect and disposition) is already instrumented inside our own enrichment audit process. From every client campaign from now on, bounce and connect rates flow back into the Enrichment Tracker, attributed to the provider that sourced each contact. Decay data, whether a phone number still works six months later, comes from re-verification passes on earlier lists.

By Q3 2026, we expect 10 to 15 campaigns to have full accuracy data backfilled. That will ship as v2 of the study. By Q1 2027, we expect 40+ campaigns in the accuracy layer, which unlocks the full head-to-head ranking v2 aims for: true cost per verified meeting by provider, by cohort, a number nobody has ever published.

We will also run targeted controlled studies in parallel with client work: the same list through multiple providers in isolation, to produce true head-to-head hit-rate rankings. These will be cohort-specific (French mid-market first, UK CEO second, Swiss SMB third) and published as quarterly updates.

If you want to see how these patterns apply to your own pipeline, book a 30-minute call at profitbl.com/book-your-growth-session. We will walk through your target geographies and personas and show you what the data says about your reachability ceiling before you commit to a provider stack.

If you want quarterly updates as the v2 accuracy layer ships, the European B2B Data Quarterly newsletter is where we publish ongoing findings and provider-specific analyses. Custom benchmarks on your own target list are also available as a paid engagement.

Methodology challenges, provider re-measurement requests, and corrections: info@profitbl.com. We respond to all of them, publicly in quarterly updates when the findings are material.

Appendix, Full provider roster and dataset taxonomy

Providers evaluated in this study: Kaspr, Wiza, Icypeas, Enrow, LeadMagic, Findymail, ContactOut, Forager, Datagma, Hunter, Prospeo, Dropcontact, Clay (orchestrator), Kitt, Nimbler, Snov.io, upcell, SMARTe, Surfe, RocketReach, People Data Labs, plus direct-scraping tools used inside some clients' waterfalls (Claygent).

Dataset split (v1, pending re-classification for v2):



Geographies covered: France, UK, Ireland, Belgium, Netherlands, Luxembourg, Germany, Switzerland, Austria, Italy, Spain, Portugal, Poland, Czech Republic, Slovakia, Hungary, Sweden, Denmark, Norway, Finland, Iceland, plus some US/Canada contamination across multi-country lists.

Industries represented: SaaS, cybersecurity, fintech, retail and retail-tech, HR tech, law, construction, hospitals and healthcare, manufacturing, advertising and marketing services, professional services, real estate, banking and capital markets, insurance, education and edtech.

Total investment: approximately €15,000 in enrichment credits and subscriptions across 16 months, absorbed into operational client work.

Published independently by Profitbl. 2026 Interim Findings. Next scheduled update: Q3 2026 (v2 accuracy layer, email bounce and phone connect data added).

Selector tool: profitbl.com/tools/data-provider-selector · info@profitbl.com

Take action today

Schedule your 30-minute introductory call today.

Stop riding the revenue rollercoaster and start confidently forecasting your growth

Unlock a systematic outbound channel that delivers consistent results month after month.

Book a Call Now