Future of online mastering, will be replaced by AI, plug-ins, homemade lowcost guys?
Old 26th February 2021 | Show parent
  #61
Gear Addict
 
DBarbarulo's Avatar
 
Verified Member
1 Review written
🎧 15 years
Think about the impact that physical copies had on the client's "perceived production value" in traditional releases. Even if he/she had recorded and mixed on a budget, a professional mastering fee made more sense when ordering, say, $2,000 and up of CDs or $7,000 of vinyl: it ensured quality control and "protected" the client's money from manufacturing defects.
If today you pay $50 a year to release unlimited tracks on internet platforms, there is no such manufacturing defect to protect against. The worst case scenario is that your track will sound like crap, and that will probably change nothing. You will do better next time... or you should rewatch that YouTube tutorial on "how to master any track in 3 simple steps" :D
Old 26th February 2021 | Show parent
  #62
Lives for gear
 
Verified Member
🎧 10 years
Quote:
Originally Posted by scraggs ➡️
Feel like we had this thread 5 years ago and someone was telling me I'd be out of business in 5 years. Still here, doing fine, same as ever.

Last year was busier than the year before.

Not really worried about it.
Five years ago? More like once every 3 days...
Old 26th February 2021 | Show parent
  #63
Lives for gear
 
Ragan's Avatar
 
🎧 10 years
Quote:
Originally Posted by artech909 ➡️
I know what you're talking about very well, and the "FFT curve" answer was a very simplified version of your answer. As an audio engineer I have deep discussions about audio-related stuff with my friends (programmers & AI designers). In the end we all came to the conclusion that this is all about human feelings (you can't describe them in logical machine code), something very random and illogical (hard-to-predict stuff that you can't translate into code that creates predictably beautiful audio results in every situation). Interaction with another human is another important part of this puzzle. And so on.
Training a machine learner is precisely the way to get around having to code a prescribed, rigid algorithm. You feed it a bunch of input vectors and (at least in supervised learning, which this is) a bunch of targets and you let the learner modify its weights until it can generate/classify/etc the input/output relationships that are inherent to the data set (ie a bunch of good masters). That’s why we use it to pull trends and meaning out of things like datasets of nebulous, extremely difficult to pinpoint medical phenomena. Stuff that’s more or less impossible for us (without machines) to quantify and categorize on our own.

I’m not saying I know how or when various learners will have Good Masters™️ nailed, but there’s nothing that systemically disqualifies them from doing it. If there’s a relationship there, some learner at some point can learn it.
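For anyone curious what "feed it input vectors and targets and let it modify its weights" looks like mechanically, here is a minimal toy sketch in Python/NumPy. It is plain linear regression trained by gradient descent; the "features" and "moves" are random placeholder numbers, not anything from a real mastering product.

Code:
# Toy version of the idea above: inputs and targets in, weights adjusted until
# the learner reproduces the relationship. Plain linear regression via gradient
# descent; all data here is a random placeholder, not real mastering data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # pretend per-track feature vectors
true_W = rng.normal(size=(8, 3))
Y = X @ true_W + 0.05 * rng.normal(size=(500, 3))   # pretend "good master" targets

W = np.zeros((8, 3))                     # the learner starts knowing nothing
lr = 0.05
for step in range(3000):
    err = X @ W - Y                      # how far current outputs are from targets
    W -= lr * (X.T @ err) / len(X)       # nudge weights to shrink the error

print("final mean squared error:", float(np.mean((X @ W - Y) ** 2)))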
Old 26th February 2021 | Show parent
  #64
Lives for gear
 
Ragan's Avatar
 
🎧 10 years
Quote:
Originally Posted by SmoothTone ➡️
Can you get it to feel like a human does? I think that's where the important differences are.
It doesn’t need to ‘feel’ anything. If you feed it a few hundred thousand relationships between unmastered music and mastered music which we deem ‘good’ (a classification we make based on our feelings), it can learn what to do.

Not saying it’s been done, just that there’s nothing categorically disqualifying it from being done.
Old 26th February 2021 | Show parent
  #65
Lives for gear
 
Ragan's Avatar
 
🎧 10 years
Quote:
Originally Posted by vyedmic ➡️
I'm no psychology expert and have not read any research about how intuition is understood today. Anecdata warning!

For me, intuition is very close to feeling, and it feels like a polar opposite of making a rational decision. In fact, there is often conflict between what the intuition tells me and what my "reason" tells me. This tension is what shapes my creativity.

It is hard for me to imagine AI ever replicating this tension.
I don’t think you’re wrong, but I think it’s worth pointing out that a machine learner doesn’t need to replicate any feelings or tension or vibes or anything like that. It just needs to look at the choices that vibey, feeling, tense humans have made (lots and lots of them) and connect inputs to outputs. It’s the same reason you can end up with machine learners that incorporate various human biases...they pull relationship pathways out of what we already do, what we’ve done. They’re at the whim of the existent data and that existent data is chock full of all of our vibes, feelings and tensions.
Old 26th February 2021 | Show parent
  #66
Gear Addict
 
DBarbarulo's Avatar
 
Verified Member
1 Review written
🎧 15 years
Some plugin manufacturers already use a kind of "spyware" on their beta testers' machines that learns the way, say, an EQ is used by a human: if the source spectrum is A, which frequencies did Mr. Doe move, over what time frame, before he chose to print B? We may see Mr. Doe, now a fancy name close to retirement, lending his signature to some AI algorithm...
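Purely as an illustration of what such usage logging might capture (a made-up sketch, not any vendor's actual telemetry format), the data could be as simple as "source spectrum in, list of timed EQ moves out":

Code:
# Hypothetical sketch of that kind of usage logging: which EQ bands the tester
# moved, by how much, and when, next to a snapshot of the source spectrum.
# Class and field names are invented for illustration only.
from dataclasses import dataclass, field
from time import time

@dataclass
class EqMove:
    band_hz: float       # centre frequency the user adjusted
    gain_db: float       # gain they settled on
    q: float             # bandwidth of the move
    t_offset_s: float    # seconds since the session started

@dataclass
class SessionLog:
    source_spectrum: list          # e.g. averaged third-octave levels of spectrum "A"
    moves: list = field(default_factory=list)
    started_at: float = field(default_factory=time)

    def record(self, band_hz, gain_db, q):
        self.moves.append(EqMove(band_hz, gain_db, q, time() - self.started_at))

# One SessionLog per track; the (source_spectrum, moves) pairs are exactly the
# kind of input/output examples a learner could later be trained on.
log = SessionLog(source_spectrum=[-18.0, -16.5, -20.1])   # dummy values
log.record(band_hz=3200.0, gain_db=-1.5, q=1.4)
print(log.moves)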
Old 26th February 2021
  #67
Lives for gear
 
Analogue Mastering's Avatar
 
Verified Member
🎧 10 years
If we look at the plugins that currently offer the most "AI", MasteringTheMix and TEOTE would probably come to mind.
But just picking genre presets, slapping an Elevate artist preset on top and using Youlean to monitor your levels still gives you a mediocre result if you don't know what you are doing and don't have proper monitoring / a proper room.
EVERY track is composed differently; the one-size-fits-all approach (or target-curve approach) ruins 9 out of 10 tracks, as there is always something that suffers.
The gap is not bridged yet. Knowledge, listening, vision and a strategy to get there will remain indispensable.

So 300,00 worth of plugins and 5" Genelecs against your untreated bedroom wall won't make you an ME.
Old 26th February 2021 | Show parent
  #68
Lives for gear
 
Ragan's Avatar
 
🎧 10 years
My apologies for the burst posting. Just came across this topic and it’s something I find very interesting.

And I think it’s worth adding that there are a bunch of obstacles in the way of pulling this off perfectly. But I don’t think there’s anything definitionally preventing it from being done.
Old 26th February 2021
  #69
Gear Maniac
 
AI mastering will be the future once there is an actual AI mastering assistant on the market.
I am not aware of a real AI-powered tool at the moment; the Ozone assistant is not an AI!

MEs reporting that they are doing fine should be aware that they still don't have real AI-powered competition.

Most people simply lack dedicated AI accelerator / machine-learning hardware in their systems, so there is no real target audience to develop for.
The situation changed just recently with Apple shipping the Neural Engine in every M-powered Mac; now developers have a clear target.

People questioning how the computer will control their analog gear should check out https://accessanalog.com/
Old 26th February 2021 | Show parent
  #70
Lives for gear
 
Ragan's Avatar
 
🎧 10 years
Quote:
Originally Posted by Analogue Mastering ➡️
If we look at the plugins that currently offer the most "AI", MasteringTheMix and TEOTE would probably come to mind.
But just picking genre presets, slapping an Elevate artist preset on top and using Youlean to monitor your levels still gives you a mediocre result if you don't know what you are doing and don't have proper monitoring / a proper room.
EVERY track is composed differently; the one-size-fits-all approach (or target-curve approach) ruins 9 out of 10 tracks, as there is always something that suffers.
The gap is not bridged yet. Knowledge, listening, vision and a strategy to get there will remain indispensable.

So 300,00 worth of plugins and 5" Genelecs against your untreated bedroom wall won't make you an ME.
ML isn’t about presets though. And the “knowledge”, “vision”, etc isn’t something the learner has to possess. It’s in the data. The learner just has to go “in the past, when presented with this kind of transient behavior, at these levels, with this frequency spectrum, for this dynamic range, etc, what did you people do?” “You people” being human mastering engineers with knowledge, experience, vision, taste, soul, vibe, etc.

All manner of challenges but nothing inherently disqualifying.
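A heavily simplified sketch of that "what did you people do with inputs like this?" idea, assuming you could describe tracks by a few coarse features and had a reference set of past mastering moves (all names, numbers and the meaning of the "moves" are invented here):

Code:
# Sketch: describe a track with a few coarse features, then ask "what did
# engineers do on the most similar past tracks?" via nearest-neighbour lookup.
import numpy as np

def features(audio, sr):
    """Coarse descriptors: RMS level (dB), crest factor, spectral centroid (kHz)."""
    rms = np.sqrt(np.mean(audio ** 2))
    crest = np.max(np.abs(audio)) / (rms + 1e-12)
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([20 * np.log10(rms + 1e-12), crest, centroid / 1000.0])

rng = np.random.default_rng(1)
ref_features = rng.normal(size=(1000, 3))   # features of past, human-mastered tracks
ref_moves = rng.normal(size=(1000, 3))      # e.g. [low shelf dB, high shelf dB, limiter threshold]

def suggest_moves(audio, sr, k=10):
    dists = np.linalg.norm(ref_features - features(audio, sr), axis=1)
    nearest = np.argsort(dists)[:k]
    return ref_moves[nearest].mean(axis=0)   # average what was done on similar tracks

sr = 44100
print(suggest_moves(rng.normal(size=sr * 5) * 0.1, sr))   # 5 s of noise as a stand-in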
Old 26th February 2021 | Show parent
  #71
Lives for gear
 
Analogue Mastering's Avatar
 
Verified Member
🎧 10 years
Quote:
Originally Posted by Ragan ➡️
ML isn’t about presets though. And the “knowledge”, “vision”, etc isn’t something the learner has to possess. It’s in the data. The learner just has to go “in the past, when presented with this kind of transient behavior, at these levels, with this frequency spectrum, for this dynamic range, etc, what did you people do?” “You people” being human mastering engineers with knowledge, experience, vision, taste, soul, vibe, etc.

All manner of challenges but nothing inherently disqualifying.
That will be a hell of an algo, as even two tracks in a similar genre require different treatment; hell, even two remixes of the same track. AI can't decide on a sibilant voice, whether to settle for a bit more "sss" or go dull. It can't decide if the track needs "more cowbell"; an ideal response curve can't cater for taste.

That's the whole point. Ironing everything out into a genre target curve is not the answer. A mathematical approach that derives compression settings from BPM is not a one-size-fits-all band-aid. That's the whole point.
Old 26th February 2021 | Show parent
  #72
Gear Addict
 
DBarbarulo's Avatar
 
Verified Member
1 Review written
🎧 15 years
We cannot predict how far we are from such technology. The fact remains that a handful of real people will keep doing the job the good old way, and a much larger number will buy the plugin (or the subscription) that contains the brain of a top engineer. Results will vary, for sure, but they will fall within an acceptable range at a fraction of the cost of a human-made job.
Old 26th February 2021 | Show parent
  #73
Lives for gear
 
FabienTDR's Avatar
 
Verified Member
🎧 5 years
Quote:
Originally Posted by DBarbarulo ➡️
The truth to me is that there aren't enough financial resources in the business to sustain 13,000 operators doing the same thing (and I'm sure there are far more than that).
This is a fact. A broken (i.e. openly bleeding) balance of B2B demand and supply.

But I think this downplays the B2C prospects that have been gained. Consider how much you could make in this sector today with education, selling sounds and kits, plugin development, and so on, all servicing hobbyists rather than (struggling) businesses. This is a billion-dollar market, still growing.

Hobby and fun are sustainable ground for a market; servicing them can certainly be as honest, prestigious and rewarding as the old B2B models.

Last edited by FabienTDR; 26th February 2021 at 11:30 AM..
Old 26th February 2021
  #74
Lives for gear
 
Analogue Mastering's Avatar
 
Verified Member
🎧 10 years
But that's the whole point. It's not pure decision making, it's decision making in context.
I saw a TV show the other day demonstrating a university project on AI music. It sounded like lift-music muzak: no life, no soul. The system had ingested millions of analysed tracks and progressions across hundreds of genres, but everything that came out sounded bland.

With mastering, rebalancing and target loudness are only a small part of the equation.
Old 26th February 2021 | Show parent
  #75
Lives for gear
 
FabienTDR's Avatar
 
Verified Member
🎧 5 years
Quote:
Originally Posted by Jolosch ➡️
AI mastering will be the future once there is an actual AI mastering assistant on the market.
I am not aware of a real AI-powered tool at the moment [...]
"AI" is a popular buzz word used to describe class of tools (neural nets, genetic algos, pathfinding, fuzzy logic, finite state machines, bayesian techniques, probability), each with distinct pros and cons.

At its very basics, even a simple feedback control loop/PID controller already belongs to it (e.g. thermostats, cruise control, fly by wire).
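For example, a bare-bones PID controller, the thermostat / cruise-control kind of feedback loop mentioned above, fits in a few lines (the gains here are arbitrary example values, not tuned for any real system):

Code:
# Bare-bones PID controller, the simplest feedback "AI" named above.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Present error (P) + accumulated error (I) + trend of the error (D).
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy thermostat: push a room from 17 °C toward 21 °C.
pid = PID(kp=1.2, ki=0.1, kd=0.05)
temperature = 17.0
for _ in range(60):
    heat = pid.update(setpoint=21.0, measured=temperature, dt=1.0)
    temperature += 0.02 * heat - 0.01 * (temperature - 15.0)   # crude room model
print(round(temperature, 2))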


Connecting all this to something meaningful is an art form.

Old 26th February 2021 | Show parent
  #76
Gear Nut
 
AudiotalesDesign's Avatar
 
Verified Member
Quote:
Originally Posted by artech909 ➡️
About the Noisia example... They invested very big money in a studio designed by Northward Acoustics, equipped with mastering-grade ATC speakers, and only then did they start doing it themselves. There is a BIG difference versus the usual cheap home production studio.
Old 26th February 2021 | Show parent
  #77
Lives for gear
 
Jerry Tubb's Avatar
 
Verified Member
🎧 15 years
Quote:
Originally Posted by scraggs ➡️
Feel like we had this thread 5 years ago and someone was telling me I'd be out of business in 5 years. Still here, doing fine, same as ever.
Last year was busier than the year before.
Not really worried about it.
Yes QFT!

This ridiculous subject keeps popping up regularly, along with articles in various online publications.

People (& journalists) trying to sound cool and in the know...

when they really don’t know jack squat (as we say in Texas).

I mastered about 250 projects in 2020, during the whole nutty Covid scenario, all unattended, and 2021 is picking up nicely.

Sure, the neophyte home mixer in the bedroom with no budget may use some “A.I.” process, but the axiom holds true:

You Pay Peanuts, You Get Monkeys

So join me and Magnus Robot Fighter in resisting any form of so-called AI,

and fight back against the purveyors of Skynet, who really just want your money.

Best, JT
Old 26th February 2021 | Show parent
  #78
Gear Addict
 
DBarbarulo's Avatar
 
Verified Member
1 Review written
🎧 15 years
Jerry, it is always nice to read good news. Averaging 4-5 tracks per project, you did around 1,000-1,300 tracks in 2020, which is great, but, as usual, I don't think it is a widely shared experience.
Old 26th February 2021 | Show parent
  #79
Gear Head
 
Quote:
Originally Posted by Prototech ➡️
Yes, Noisia is an exact example of what I mean: most EDM producers are going to master themselves, and many of them, like Noisia, are asking 30 - 50,000 for one live act (as I heard), so they can afford any studio.

Split the Atom was released 11 years ago; since then they haven't used any mastering studio and have done everything DIY, right?
Right... so now we're talking about one group of very talented engineers who have rooms designed by Thomas. How does this compare to any other living soul's situation that we're discussing?
Old 26th February 2021 | Show parent
  #80
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by Jerry Tubb ➡️

People (& journalists) trying to sound cool and in the know...

when they really don’t know jack squat (as we say in Texas).
This. Perhaps the most significant commentary in this thread.

One should also add those armchair economists parroting dubious data and ideas taken from the web.
Old 26th February 2021
  #81
Gear Head
 
We're also overlooking the most critical factor here, which is the confidence that comes from a fresh perspective. It's the one main thing that sets experienced M.E.s apart from 99.999% of artists and cannot, so far as I can see, ever be provided by A.I.
Old 26th February 2021 | Show parent
  #82
Lives for gear
 
Trakworx's Avatar
 
Verified Member
3 Reviews written
🎧 10 years
Quote:
Originally Posted by DBarbarulo ➡️
... averaging 4-5 tracks per project, you did around 1,000-1,300 tracks in 2020, which is great, but, as usual, I don't think it is a widely shared experience.
I can't speak for anyone else, but in 2020 I mastered over 2,300 tracks. In 2019 I mastered more than 1,800 tracks. There's nothing special about me, so I think those kinds of numbers are probably pretty common for full time MEs. And notice how the trend is upward not downward.
Old 26th February 2021 | Show parent
  #83
Gear Addict
 
DBarbarulo's Avatar
 
Verified Member
1 Review written
🎧 15 years
Justin, we are missing the point here. The question is not how many clients a handful of us handle in a year. The interesting part would be spotting a trend, for the sake of discussion and statistics. If you, Jerry, and others are witnessing market growth, and when you talk with friends and colleagues the mood is the same, that is a statistically positive trend. From this side of the planet the trend may appear positive to me personally, but it does not look like statistical growth, since I personally know several studios and professionals facing trouble, both here and in other countries.
Old 26th February 2021 | Show parent
  #84
Gear Addict
 
DBarbarulo's Avatar
 
Verified Member
1 Review written
🎧 15 years
Another interesting topic: if the average really is 1,000-2,000 tracks per year for each of us, where the hell are 4 million new tracks a year (on top of the 50 million published by DIY producers) going to find listeners, if people still mostly listen to music from the last century?

Edit: and what can we do about the credit-tracing problem, since only 10% (maybe less) of our work appears listed anywhere? Jerry said he worked on 250 projects in 2020, but only 4 appear on Discogs. Justin did 2,300+ tunes, but only 16 jobs are tracked for 2020. Mr. Calbi has 100 records tracked for 2020, and if the same percentage applies he may have done 15,000 tracks last year (that is 50 tracks mastered per day across 300 working days).

Last edited by DBarbarulo; 26th February 2021 at 09:58 PM..
Old 26th February 2021 | Show parent
  #85
Lives for gear
 
Ragan's Avatar
 
🎧 10 years
Quote:
Originally Posted by Analogue Mastering ➡️
That will be a hell of an algo, as even two tracks in a similar genre require different treatment; hell, even two remixes of the same track. AI can't decide on a sibilant voice, whether to settle for a bit more "sss" or go dull. It can't decide if the track needs "more cowbell"; an ideal response curve can't cater for taste.

That's the whole point. Ironing everything out into a genre target curve is not the answer. A mathematical approach that derives compression settings from BPM is not a one-size-fits-all band-aid. That's the whole point.
I get what you're saying, but I think you're still misunderstanding what ML is to some extent. Think of it as a running, statistical analysis. When you train a learner, you run it through a large battery of inputs and let it modify its own weights (and sometimes internal structure as well, adding layers, etc) until it correctly generates outputs that match the target dataset, to some desired threshold. Once trained, when you feed it new inputs (where it has no targets), it's running off a bunch of statistical realities inherent to the large dataset of masters it was trained on. It doesn't need to address individual concerns like 'de-essing' in the way we do, it just sort of says 'based on the work of these thousands or millions of mastering jobs, here's what (statistically) mastering engineers would do to these inputs'.
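As a toy sketch of that train-to-a-threshold-then-infer loop (synthetic numbers throughout, nothing to do with real audio data): the weights are nudged until held-out examples are predicted well enough, and only then is the frozen model run on a new input that has no target.

Code:
# Toy sketch of "adjust weights until outputs match targets to a threshold",
# checked on held-out data, then used on an input that has no target at all.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 6))                   # stand-in "unmastered" features
Y = 0.5 * (X @ rng.normal(size=(6, 2)))         # stand-in "what the engineers did"
X_tr, Y_tr, X_val, Y_val = X[:600], Y[:600], X[600:], Y[600:]

W = np.zeros((6, 2))
for step in range(10000):
    W -= 0.05 * X_tr.T @ (X_tr @ W - Y_tr) / len(X_tr)      # gradient step on MSE
    if np.mean((X_val @ W - Y_val) ** 2) < 1e-4:            # the "desired threshold"
        print("threshold reached at step", step)
        break

# Inference: a brand-new track's features, no target given. The output is just
# what the statistics of the training set imply for an input like this.
print(rng.normal(size=(1, 6)) @ W)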

Again, there are innumerable hurdles. But none of them involve possessing intangible stuff like 'instinct' or 'taste' or anything like that. It's purely statistical. Intangible things like 'taste' or 'style' or 'experience' only exist in masters once the mastering engineer actually makes a decision and adjusts something. Those adjustments are concrete realities that manifest themselves in the audio and statistical analysis can access them (to varying degrees of success), particularly if you've got a lot of data.

So my only point is that there isn't anything that inherently disqualifies ML from being able to produce results that we couldn't distinguish from the results people produce. Whether any entity actually pulls it off (and to what degree) is an open question, and one that can only be evaluated subjectively. For one thing, I don't know how many entities would find it worthwhile to invest the necessary resources in talented ML engineers to take the time to gather data and do research and get something like this up and running. Sure, there'll be lots of attempts. And lots and lots of marketing. People love throwing the (colloquial, retail) term "AI" around. But using some simple classifier to do some basic categorization in an algorithm is far, far short of what it would take to really have "ML Mastering".

But ML + available data is powerful. More powerful than people think. To me, it's not at all out of the question that things like this will be a reality at some point. Who knows when or at what cost. Most of that depends on market stuff, ie is anyone actually willing to put the necessary resources into it? Is it worth it? I don't see audio mastering as a big priority for the types of ML/data entities that have the talent to do it well.
Old 26th February 2021 | Show parent
  #86
Lives for gear
 
Analogue Mastering's Avatar
 
Verified Member
🎧 10 years
I do agree with you, but the whole problem with ML is that it's hyped by marketing, talking about predictive analysis, prevention, etc. In practice they don't get any further than chatbots that are badly scripted and, in general, more annoying than helpful. So I believe in its promise, but the gap between promise and practice hasn't been closed yet.

Quote:
Originally Posted by Ragan ➡️
I get what you're saying, but I think you're still misunderstanding what ML is to some extent. Think of it as a running, statistical analysis. When you train a learner, you run it through a large battery of inputs and let it modify its own weights (and sometimes internal structure as well, adding layers, etc) until it correctly generates outputs that match the target dataset, to some desired threshold. Once trained, when you feed it new inputs (where it has no targets), it's running off a bunch of statistical realities inherent to the large dataset of masters it was trained on. It doesn't need to address individual concerns like 'de-essing' in the way we do, it just sort of says 'based on the work of these thousands or millions of mastering jobs, here's what (statistically) mastering engineers would do to these inputs'.

Again, there are innumerable hurdles. But none of them involve possessing intangible stuff like 'instinct' or 'taste' or anything like that. It's purely statistical. Intangible things like 'taste' or 'style' or 'experience' only exist in masters once the mastering engineer actually makes a decision and adjusts something. Those adjustments are concrete realities that manifest themselves in the audio and statistical analysis can access them (to varying degrees of success), particularly if you've got a lot of data.

So my only point is that there isn't anything that inherently disqualifies ML from being able to produce results that we couldn't distinguish from the results people produce. Whether any entity actually pulls it off (and to what degree) is an open question, and one that can only be evaluated subjectively. For one thing, I don't know how many entities would find it worthwhile to invest the necessary resources in talented ML engineers to take the time to gather data and do research and get something like this up and running. Sure, there'll be lots of attempts. And lots and lots of marketing. People love throwing the (colloquial, retail) term "AI" around. But using some simple classifier to do some basic categorization in an algorithm is far, far short of what it would take to really have "ML Mastering".

But ML + available data is powerful. More powerful than people think. To me, it's not at all out of the question that things like this will be a reality at some point. Who knows when or at what cost. Most of that depends on market stuff, ie is anyone actually willing to put the necessary resources into it? Is it worth it? I don't see audio mastering as a big priority for the types of ML/data entities that have the talent to do it well.
Old 26th February 2021 | Show parent
  #87
Lives for gear
 
Ragan's Avatar
 
🎧 10 years
Quote:
Originally Posted by Analogue Mastering ➡️
I do agree with you, but the whole problem with ML is that it’s hyped by marketing, talking about predictive analysis, prevention etc. But they don’t come any further than chatbots that are badly scripted and in general are more annoying than helpful. So I believe in it’s promise but the gap between promise and practise hasn’t been closed yet.
I think our differing views probably stem from where we approach ML from. I'm assuming (and correct me if I'm wrong) that you're approaching it from a somewhat casual, consumer angle. If that's the case, I can sympathize with picturing ML as mostly being hype and "chatbots". I come at it from the engineering angle and I can assure you, it's a lot more than that. Uses like medical diagnostics, computer vision, drug and pharmaceutical research, and a slew of others, present incredible potential, some of which is theoretical, some of which is already very much concrete. And like any other powerful tool, it's got a bunch of problems too. Like picking up dataset bias in facial recognition, criminal justice applications, or various other things.

But anyway, it's an awful lot more than "chatbots".
Old 26th February 2021 | Show parent
  #88
Lives for gear
 
Analogue Mastering's Avatar
 
Verified Member
🎧 10 years
Quote:
Originally Posted by Ragan ➡️
I think our differing views probably stem from where we approach ML from. I'm assuming (and correct me if I'm wrong) that you're approaching it from a somewhat casual, consumer angle. If that's the case, I can sympathize with picturing ML as mostly being hype and "chatbots". I come at it from the engineering angle and I can assure you, it's a lot more than that. Uses like medical diagnostics, computer vision, drug and pharmaceutical research, and a slew of others, present incredible potential, some of which is theoretical, some of which is already very much concrete. And like any other powerful tool, it's got a bunch of problems too. Like picking up dataset bias in facial recognition, criminal justice applications, or various other things.

But anyway, it's an awful lot more than "chatbots".
I get it, but in general it's "if you encounter X, do Y" (in many hyper-complex, multidimensional variations). My earlier point was that you can't funnel art through big-data-fed decision making. The results of using ML to fuel art have been disappointing so far. It's more than just decision making by weighting options.
Even if you shadowed 1,000 MEs and documented their input, their output and the steps they took to get there, you still wouldn't be able to define control limits that drive actions. This is not as simple as ethnic profiling cross-checked against 10,000,000 big-data points.
Old 27th February 2021 | Show parent
  #89
Lives for gear
 
Trakworx's Avatar
 
Verified Member
3 Reviews written
🎧 10 years
Quote:
Originally Posted by DBarbarulo ➡️
If you, Jerry, and others are witnessing market growth, and when you talk with friends and colleagues the mood is the same, that is a statistically positive trend.
Right. I thought that was what we were witnessing in this thread. Several of us colleagues here saying the same thing...
Old 27th February 2021 | Show parent
  #90
Gear Maniac
 
🎧 10 years
Quote:
Originally Posted by Ragan ➡️
I get what you're saying, but I think you're still misunderstanding what ML is to some extent. Think of it as a running, statistical analysis. When you train a learner, you run it through a large battery of inputs and let it modify its own weights (and sometimes internal structure as well, adding layers, etc) until it correctly generates outputs that match the target dataset, to some desired threshold. Once trained, when you feed it new inputs (where it has no targets), it's running off a bunch of statistical realities inherent to the large dataset of masters it was trained on. It doesn't need to address individual concerns like 'de-essing' in the way we do, it just sort of says 'based on the work of these thousands or millions of mastering jobs, here's what (statistically) mastering engineers would do to these inputs'.

Again, there are innumerable hurdles. But none of them involve possessing intangible stuff like 'instinct' or 'taste' or anything like that. It's purely statistical. Intangible things like 'taste' or 'style' or 'experience' only exist in masters once the mastering engineer actually makes a decision and adjusts something. Those adjustments are concrete realities that manifest themselves in the audio and statistical analysis can access them (to varying degrees of success), particularly if you've got a lot of data.

So my only point is that there isn't anything that inherently disqualifies ML from being able to produce results that we couldn't distinguish from the results people produce. Whether any entity actually pulls it off (and to what degree) is an open question, and one that can only be evaluated subjectively. For one thing, I don't know how many entities would find it worthwhile to invest the necessary resources in talented ML engineers to take the time to gather data and do research and get something like this up and running. Sure, there'll be lots of attempts. And lots and lots of marketing. People love throwing the (colloquial, retail) term "AI" around. But using some simple classifier to do some basic categorization in an algorithm is far, far short of what it would take to really have "ML Mastering".

But ML + available data is powerful. More powerful than people think. To me, it's not at all out of the question that things like this will be a reality at some point. Who knows when or at what cost. Most of that depends on market stuff, ie is anyone actually willing to put the necessary resources into it? Is it worth it? I don't see audio mastering as a big priority for the types of ML/data entities that have the talent to do it well.
So the advanced statistical model with a big dataset - "ML" - can replicate the past almost perfectly. With ingrained 'instincts', 'taste' and 'vision'.

Can it innovate? Can it respond to clients' queries?

Please don't get me wrong, I am not some anti-AI inquisitor. I would love to have a realtime ML machine with a huge dataset and the possibility to tweak the model to my liking. That would be a super exciting tool and I hope to live long enough to use it. As it stands now, I have to trust the 'instinct', 'taste' and 'vision' of the mathematicians/coders/software engineers who tuned the model in the first place. This is not much different from using a preset on, say, a TC Finalizer. Am I wrong?

Last edited by vyedmic; 27th February 2021 at 09:19 PM.. Reason: Punctuation