Update: Windsurf moves to daily/weekly credit quotas on Pro/Max plans; introduces GLM 4.7 / GLM 5

New Controversial Windsurf Pricing Model

The pricing model Windsurf announced this week has caused a stir… we’re all waiting with bated breath to see how this pans out, hopefully in our favour.

So now there is Pro ($20) and Max ($200), as well as team plans… see Pricing | Windsurf for the feature breakdown at the bottom of the page.

Here is an estimate of the number of messages that you may expect to see:

| Model tier | Pro and Teams | Max |
| --- | --- | --- |
| Premium Plus (Opus 4.6, GPT-5.4, GPT-5.3-Codex) | 7-27 messages/day | 42-170 messages/day |
| Premium (e.g. Sonnet 4.6, GPT-5.2, Gemini Pro) | 8-101 messages/day | 47-631 messages/day |
| Lightweight (e.g. Haiku, Flash) | 47-190 messages/day | 291-1,190 messages/day |

The usage limits assume a daily window. An additional weekly limit applies. The estimates are based on current usage patterns.

AI and BDD

It’s actually super hard to get AI to work in BDD mode and create specs BEFORE creating code: it has to do the reasoning either way, so it’s tempting to let it code some of it out “for free” while it’s there. Working out that balance has been a learning experience too.

I now have a daily and a weekly quota, and can buy extra credits if I go over. I’m not sure what I think about that; I would have preferred a weekly/monthly quota, as some days I use it and some days I don’t. So I’ll now need to create a bot that loads up a workflow of tasks, with testing, to see how we go.

I’m a pretty heavy user at the moment, as I’m rearchitecting a lot of SQL code for 2025 and updating my components to the next gen, so I’ve been using Claude (Sonnet/Opus) a LOT. But the over-engineering forces you to put SOOOO many guardrails around it.

GLM 4.7 and GLM 5 in beta on Windsurf

So now that GLM 4.7 and GLM 5 are in beta on Windsurf and dirt cheap to use, it could be good to see what I can get out of them this week.

I’ll put both in the ‘arena’ (another Windsurf feature that compares two AIs doing the same task, to see the quality of output) a few times too… so it’s going to be an exciting week, as GLM 5 is meant to be killer!

Windsurf blog on the new pricing model

Introducing our new Windsurf pricing plans


What’s changing:

  • No more credits. Your Free, Pro, or Teams plan includes a usage allowance that refreshes automatically on a daily and weekly basis. For the majority of users, this quota will be enough to fully cover all agent usage.
  • For paid plans, if you go beyond your included usage, you can purchase extra usage which will be consumed at API pricing.
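The bullets describe a two-window allowance: a daily quota that refreshes each day, plus a separate weekly cap. Purely as an illustration of that mechanic (a toy model, NOT Windsurf’s actual metering logic or numbers), the interaction looks like this:

```python
# Toy model of a daily allowance combined with a weekly cap.
# Illustrative only -- not Windsurf's actual metering.
class Quota:
    def __init__(self, daily: float, weekly: float) -> None:
        self.daily = daily
        self.weekly = weekly
        self.used_today = 0.0
        self.used_this_week = 0.0

    def try_spend(self, cost: float) -> bool:
        """A request is allowed only if it fits BOTH windows."""
        if self.used_today + cost > self.daily:
            return False  # daily allowance exhausted
        if self.used_this_week + cost > self.weekly:
            return False  # weekly cap exhausted, even if today has room
        self.used_today += cost
        self.used_this_week += cost
        return True

    def new_day(self) -> None:
        self.used_today = 0.0  # daily window refreshes automatically

    def new_week(self) -> None:
        self.used_this_week = 0.0  # weekly cap refreshes separately


q = Quota(daily=10, weekly=40)
print(q.try_spend(8))   # True: fits both windows
print(q.try_spend(8))   # False: would blow past today's 10
q.new_day()
print(q.try_spend(8))   # True: daily window reset; weekly still has room
```

On a paid plan, a request blocked by both windows would then fall through to the pay-as-you-go extra usage the post mentions.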

The CASCADE and PLAN modes are incredible in this IDE; they make Copilot look like a toy. (Windsurf is built on Monaco like vscode, of course, so you can add CFML extensions too.)

Also, the workflows, skills, rules and memories features are a life-changer. I’ve moved totally away from instruction files now. I’ll post again about that once I get a full handle on how best to configure the combo.
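For anyone curious what that setup looks like on disk: Windsurf reads project rules and workflows from markdown files inside the repo. The directory names below match my understanding of Windsurf’s layout, but the file contents are a made-up CFML-flavoured example, not anything official:

```markdown
<!-- .windsurf/rules/cfml-style.md : a project rule the agent follows -->
- Write all new CFML as cfscript, not tag-based CFML.
- Target Lucee 7.x only; avoid Adobe-only functions.
- Every new component gets a matching TestBox spec.

<!-- .windsurf/workflows/new-module.md : invoked in Cascade as /new-module -->
Scaffold a new module: create the component, a TestBox spec and a
README, following the project rules above.
```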

And with GLM 4.7 at 0.25 credits (INSANE) and GLM 5 at 1.25 credits, that’s insane value, about 12 times cheaper than Claude atm…

Looking to self-host GLM 4.7 32B locally soon; that will be incredible…

ATM I’m also using SWE 1.5 for free on Windsurf for everyday tasks, but it’s VERY simple.


If you want to check it out and get some freebies, use this link: Get $10 in extra credits for Pro

This is getting close to being off topic, this is the Lucee forum yeah?

Tell us about how you’re actually using AI with Lucee, I’m super curious

I get what you’re saying, but not everything in Lucee is just Lucee: half of everyone’s dev is also DB and JavaScript, so we can’t be too purist about things. That causes lag in our community, which in the past has sometimes been held decades behind other tech in terms of techniques and application. Also, I’m not sure how talking about an IDE in the Tools section of a developer forum is off topic.

I’ll try to put MORE useful info SPECIFICALLY for Lucee in future posts. I just wanted to get this update out there, because for many small devs THIS IS A GAME-CHANGER!

Especially at the moment, when smaller shops are strapped for cash, AI IDEs with the right setup can provide much-needed help they might not otherwise be able to afford.

I really think that with GLM 5, within a month I’ll be able to write any module on the planet in good-quality cfscript code in minutes, just by pointing the AI at industry-leading projects and a set of rules, workflows and memories in Windsurf.

That said, I read EVERY line of code AI ever writes for me. I am meticulous about how code in my frameworks is written, and I use AI to find industry standards for security, testing and application that I can add to my ecosystem daily, things I would never have known about on top of 30 years of experience across many industries. You can’t know it all, and AI is not always going to provide the solution to everything, but right now, in some areas, these tools are killer.

I have a friend who writes the neomjs JavaScript framework (an amazing, cutting-edge, next-gen framework), and it’s built and maintained by devs using AI too; he is able to crush hundreds of tickets a day with an automated MCP server for his framework. I’d love to introduce you two, as I think Lucee could benefit in some way from this approach, especially with a small team. No doubt you already have your workflows, but I wonder if his methods or Windsurf’s approach could help Lucee advance faster…

Anyway, I’ll post more detail about what I’m doing. The whole point of telling people about this is that I have seen a 400% increase in output from moving from vscode to Windsurf. I still code by hand and check every line, with AI doing boilerplate and deep thinking (moving from Claude to GLM), but like the CommandBox CLI generating boilerplate for ColdBox and sticking to standards, I can stand up new modules and integrations in minutes rather than days.

That is a HUGE competitive advantage. As Windsurf has changed dramatically in the last month, I have to revisit my setup completely, then I’ll post something.

I need a Lucee MCP server right now, so short of baking one myself, for now I’ll just have to live with the setup I am revisiting. Has anyone baked a Lucee 7.x MCP server? I’d love to talk to them.


I’ve been away from the Lucee dev forum for a while, so it was quite the surprise when dev.lucee.org was the sixth result in a DuckDuckGo search for “windsurf AI unpopular new quota pricing”.

I was like … whut the?!

Up until the quota pricing I was a huge fan of Windsurf and have been using it since before AI was added, back when it was called Codeium, just a simple telemetry-free fork of vscode.

It’s been quite the ride for the Windsurf company, as detailed in Markus Kasanmascheff’s fascinating WinBuzzer article Anatomy of a Collapse: The Wild Takeover Saga of Windsurf, Featuring OpenAI, Anthropic, Microsoft, Google, and Cognition.

Sadly, with this devastating change not just in pricing but, more importantly, in workflow (because I am NOT about to pay $200/mth when I’m grandfathered into Windsurf at their original $10/mth, and what I’m using it for barely covers hosting and other expenses), I am now looking at Claude Code.

And I’m by far not the only aggravated Windsurf user. Many have been more vocal about it, especially in Windsurf’s own feature request page!

Unless they reverse course very soon, that ship is going down.

If they do manage to stay afloat after what will surely be a mass exodus of paying customers, I might keep using it only for their “free” (with minimum monthly subscription) in-house SWE-1.5 model which is actually pretty decent.

As for Lucee relevance, I’ll say that Claude Opus 4.6 is quite adept at Lucee, including differences in Lucee 7, no doubt because of all the improved documentation! :smiley:


sweet music to my ears!


Great to hear others feedback.

The Cascade feature is great: I can run multiple tabs doing long-running multi-task plans at the same time… I find the Windsurf AI extension far more than a simple vscode fork; even in DataGrip, I find the extended rules/memories/workflows really beneficial for decreasing my spend by improving the quality of the code (it does depend on which model combos you use, too).

I’ve actually been on the Pro plan for two or so weeks now, and I have to say I like it. I’m getting probably four to five times more requests at the base level of Pro, and haven’t needed to top up at all, whereas previously I was topping up 4-5 times a week (so spending $50 a week; now it’s ONLY the Pro plan)… I guess I’m also using the models better (not just using Claude for everything), and finding that memories/rules really help.

I actually prefer it to the old pricing model right now.

Here’s my usage:

I would prefer the quota were weekly with a monthly cap, though, not daily with weekly, as that would let me have hot and cold weeks. As it stands, this week I use 98% of my weekly quota and will be forced to pay for anything above that, but next week I’m away on holiday. That makes it silly: on a MONTHLY plan I’d pay extra this week while having quota to spare next week?

I’ve saved about $150 a month in charges this month, and that’s with heavy SQL Server 2025, cfscript and neomjs (kick-ass next-gen JS framework) work… with all the (so many) new features in SQL 2025, it really is a killer multi-faceted DB too, so I’ve spent plenty of time refactoring tables into node and ledger (blockchain-ish) tables, along with a heap of geo stuff (and it’s so good to have native regex now, too).

I’m using GLM 4.7 a LOT more, as it is 0.25 credits compared to Claude (Sonnet 4.6 is around 8 credits and Opus 12 credits), plus GLM 5 (2 credits) instead of Claude Sonnet, and I’m using Kimi K2.5 (not SWE 1.5) for scaffolding and executing boilerplate updates (aka creating boilerplate for new modules, where my rules, workflows and memories guide it).
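Using the per-message credit figures quoted above (the thread’s numbers at the time of writing, not official pricing), a quick sanity check of the relative costs:

```python
# Per-message credit costs as quoted in this thread (point-in-time figures,
# not official Windsurf pricing).
CREDITS = {"glm-4.7": 0.25, "glm-5": 2.0, "sonnet-4.6": 8.0, "opus": 12.0}

def times_cheaper(model: str, baseline: str = "sonnet-4.6") -> float:
    """How many times cheaper `model` is than `baseline`, per message."""
    return CREDITS[baseline] / CREDITS[model]

print(times_cheaper("glm-4.7"))                   # 32.0 -- 8 / 0.25
print(times_cheaper("glm-5"))                     # 4.0  -- 8 / 2
print(times_cheaper("glm-4.7", baseline="opus"))  # 48.0 -- 12 / 0.25
```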

GLM vs Claude: I’ve had far less over-engineering since I started using GLM more for Lucee when architecting new items. Claude, imho, is still a little better at framework architecture, but GLM is super cheap and does 90% as good a job, though it can be a little ‘forgetful’ of rules/memories (and can’t read images/screenshots).

That said, I noticed on the model page that they are also moving to input and output pricing for models, so I can see why they want to move away from credits and toward a daily/weekly quota… going to see how that rides out.

Working toward getting a Lucee MCP server set up in basic form too, with abilities for the TeamCFML framework… so that’s the next challenge, really.
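Since MCP keeps coming up: at its core, an MCP server is a JSON-RPC loop (usually over stdio) that mainly answers `tools/list` and `tools/call`. Below is a rough stdlib-only Python sketch of that shape. The `lucee_doc_url` tool and its docs.lucee.org URL pattern are hypothetical placeholders, and a real server would use the official MCP SDK with proper message framing rather than this simplified loop:

```python
import json
import sys

# Hypothetical single tool: map a CFML function name to a docs URL.
# (The URL pattern is an illustration, not a guaranteed docs.lucee.org scheme.)
TOOLS = {
    "lucee_doc_url": {
        "description": "Return a docs URL for a CFML built-in function.",
        "inputSchema": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    }
}

def lucee_doc_url(name: str) -> str:
    return f"https://docs.lucee.org/reference/functions/{name.lower()}.html"

def handle(request: dict) -> dict:
    """Answer one MCP-style JSON-RPC request (sketch, not the full spec)."""
    method = request.get("method")
    if method == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif method == "tools/call":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": lucee_doc_url(args["name"])}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": f"unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

def serve() -> None:
    # Simplified stdio loop: one JSON message per line. Real MCP framing
    # differs; call serve() from __main__ to actually run the server.
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

A server like this then gets registered in the editor’s MCP config so the agent can call its tools during a Cascade session.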
