Classical Editing - A few questions
Old 1 week ago
  #31
Lives for gear
 
David Rick's Avatar
 
🎧 15 years
The pedagogical "lineage" of classical musicians was once considered very important: who was one's teacher, who trained that person, and so on. One effect of the recording era is that young musicians now grow up hearing "standard" interpretations of important works and this influences their own ideas of how to play those works as much as their teacher does. Top-tier pianists or violinists once sounded very distinctive; now they sound increasingly homogeneous. It's a kind of "regression to the mean" that makes performances of the "standard repertoire" increasingly predictable (and dare I say "boring"?) over time.

I do not think that training an AI to accelerate this process is a desirable thing. If you simply want the computer to help you organize a bunch of takes, the MuSynC feature in Sequoia will already time-align them for you. Musical decisions should be left to actual humans.
Old 1 week ago | Show parent
  #32
Lives for gear
 
🎧 10 years
Yes, we all need to wear some of the blame for 'embalming' performances, although those of us working with amateur performers who need recordings for review and personal practice purposes...or for getting new compositions heard more widely...are largely exempt from David's accusation.

My AI proposal was intended to advocate (though perhaps failed to adequately stress) a clear cutoff between machine line-up and pre-assembly of takes on the one hand, and human selection, approval and tweaking of crossfades on the other...in other words, the final assembly of the edited composite recording.

I would allow a computer to make 'helpful suggestions' (like predictive text), but never delegate aesthetic decisions to it!
Old 1 week ago
  #33
Lives for gear
 
🎧 10 years
I think an AI-assisted editor will come to us sooner or later. I, for one, will welcome its arrival. I believe it can save me a lot of time by putting together a very rough, suggestive draft of the edit, just as a CNC router can cut the blank of a violin out of a piece of tree trunk for the violin maker, saving him a lot of manual labor at the beginning of the process. AI can easily evaluate a take of the music and grade its note and intonation accuracy; perhaps it can mark on the score which notes and measures of a take one should not even consider using because of wrong or out-of-tune notes. Maybe it can even tell me where the ensemble has some issues, or where some voices came in wrong. I don’t need AI to pick the “best” stuff out of the session takes for me, but I would certainly trust it to detect the “bad” stuff. After all, most of my markings on the score from the session are for the “bad” stuff, to remind myself what not to use in the editing session.
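A minimal sketch of the kind of "bad note" flagging described above, assuming a monophonic take and using librosa's pYIN pitch tracker; the cents threshold and the file name are made-up illustrations, not any product's actual algorithm:

Code:
import numpy as np
import librosa

def flag_out_of_tune(take_path, cents_tolerance=30.0):
    """Print the moments where the pitch strays more than
    `cents_tolerance` cents from the nearest equal-tempered note (A440)."""
    y, sr = librosa.load(take_path, sr=None, mono=True)

    # pYIN fundamental-frequency estimate; NaN where unvoiced.
    f0, voiced, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"),
        sr=sr,
    )

    midi = librosa.hz_to_midi(f0)                # fractional MIDI note numbers
    cents_off = 100.0 * (midi - np.round(midi))  # deviation from nearest semitone

    times = librosa.times_like(f0, sr=sr)
    bad = voiced & (np.abs(cents_off) > cents_tolerance)

    # Report the offending moments so a human can check them against the score.
    for t, c in zip(times[bad], cents_off[bad]):
        print(f"{t:7.2f}s  {c:+6.1f} cents off")

flag_out_of_tune("take_03.wav")  # hypothetical file name

Grading against the actual score (rather than the nearest equal-tempered pitch) would need score alignment on top of this, which is a much harder problem - and, as the rest of the thread argues, it still only covers the "bad" stuff, not the musical decisions.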
Old 1 week ago | Show parent
  #34
Lives for gear
 
🎧 15 years
Quote:
Originally Posted by dseetoo ➡️
After all, most of my markings on the score from the session are for the “bad” stuff, to remind myself what not to use in the editing session.
An interesting point Da-Hong.

I find when the musicians are good, you mark the bad stuff. When the musicians are not good, you mark the good stuff.
Old 1 week ago | Show parent
  #35
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by David Spearritt ➡️
An interesting point Da-Hong.

I find when the musicians are good, you mark the bad stuff. When the musicians are not good, you mark the good stuff.
That is funny but true.
Old 1 week ago | Show parent
  #36
Lives for gear
 
🎧 10 years
Maybe it (AI-assisted) wouldn't recognise 'the good stuff buried within the bad', in cases where a few needed notes were nowhere else to be found...but since it's tasked to operate within the non-destructive framework, the material is still there to be found (by the diligent human).

However, I sense the collective shiver down the spines of editors who see this as the thin end of the wedge, the twilight of the artisan, etc etc...and who never believed that 'AI could ever be so I'...

Gentlemen, take up your axes.....https://www.bbc.com/news/magazine-17770171
Old 1 week ago | Show parent
  #37
Lives for gear
 
Sharp11's Avatar
 
🎧 15 years
Quote:
Originally Posted by David Rick ➡️
The pedagogical "lineage" of classical musicians was once considered very important: who was one's teacher, who trained that person, and so on. One effect of the recording era is that young musicians now grow up hearing "standard" interpretations of important works and this influences their own ideas of how to play those works as much as their teacher does. Top-tier pianists or violinists once sounded very distinctive; now they sound increasingly homogeneous. It's a kind of "regression to the mean" that makes performances of the "standard repertoire" increasingly predictable (and dare I say "boring"?) over time.

I do not think that training an AI to accelerate this process is a desirable thing. If you simply want the computer to help you organize a bunch of takes, the MuSynC feature in Sequoia will already time-align them for you. Musical decisions should be left to actual humans.
This is a very good point, but it shows up in all the arts today - from photography, acting, electronic and orchestral music, films etc.

Too much easy exposure via media, with no one curating or creating syllabi, leads to a lot of sameness and copying in the arts. The days of Ansel Adams, Gordon Parks, Edward Hopper, Stella Adler and Aaron Copland - just off the top of my head, artists who had an individual voice and a long path to success - are long gone, for the most part.

It’s a plus and a minus, but it’s what it is in 2022.
Old 1 week ago | Show parent
  #38
Lives for gear
 
king2070lplaya's Avatar
They’re still out there, if you look hard enough, but you’re right that circumstances certainly are different. I always try to remember though that every era has had its chaff, and that we remember individuals throughout history through a long lens. And many great artists in history haven’t been as well remembered as their peers! My takeaway? Continue to seek out the art you like, history will decide what’s great and what ain’t, but that shouldn’t have too much impact on your enjoyment. Otherwise you turn into the hipster who drank his coffee before it was cool and got burnt.
Old 1 week ago | Show parent
  #39
Here for the gear
 
This thread has got some very interesting responses so far.

I have to say, I'm somewhat surprised by how much editing is apparently being done in some pieces of classical music. I've personally never seen the need to edit individual notes as opposed to just re-recording a phrase.

Then again, I understand that a very large ensemble like a symphonic orchestra may not always have the luxury of time or space to record multiple takes.
Old 1 week ago | Show parent
  #40
Gear Head
 
Wavefront's Avatar
 
Quote:
Originally Posted by David Rick ➡️
Top-tier pianists or violinists once sounded very distinctive; now they sound increasingly homogeneous. It's a kind of "regression to the mean" that makes performances of the "standard repertoire" increasingly predictable (and dare I say "boring"?) over time.
This sentiment was on my mind very much during the project I mentioned earlier in this thread. The pianist, who is of an older generation and is quite a character, has some extremely idiosyncratic (though very tasteful and well-executed) interpretations, and I found myself constantly filled with surprise and enjoyment by the freshness and artistry he brought to the project. It made the editing process far more stimulating.
Old 1 week ago | Show parent
  #41
Lives for gear
 
Plush's Avatar
 
5 Reviews written
🎧 15 years
Over-editing, like over-micing (using 50 microphones), is verboten and frowned upon. Maybe the player or group that needed all that micro-surgery was not ready to record.

There have been times when I have sent the people home.
Old 1 week ago | Show parent
  #42
Gear Guru
 
Brent Hahn's Avatar
 
1 Review written
🎧 15 years
Quote:
Originally Posted by Sharp11 ➡️
The days of Ansel Adams, Gordon Parks, Edward Hopper, Stella Adler and Aaron Copland - just off the top of my head, artists who had an individual voice and a long path to success - are long gone, for the most part.

It’s a plus and a minus, but it’s what it is in 2022.
A songwriters' workshop here recently held a Master Class on "Licensing Your Songs in TV and Movies." It was anchored by this woman who is a heavy hitter and major gatekeeper in the business. She'd listen to song submissions, usually cutting them off after about 15 seconds, and offer her expertise. Among her comments (transcribed verbatim from my Zoom recorder):

• "Is that Adult Contemporary or Pop? If I can't put it in a slot I can't sell it."

•"That thing at the top -- is that a verse or a chorus or a prechorus or what? You need to learn the basics."

•"That's too Country to be Americana and there's not enough autotune."

• "This guy Bear McCreary, when he picks tracks for a show, he might pick something like a J-Pop track that's entirely in Japanese so you can't tell what it's actually about, which is perfect!"
Old 1 week ago
  #43
Lives for gear
 
David Rick's Avatar
 
🎧 15 years
It's well before noon, I've had nothing interesting to drink, and I'm still feeling nauseous. I can't be pregnant, Brent, I'm a boy!

Last edited by David Rick; 1 week ago at 09:18 PM.. Reason: typo
Old 5 days ago | Show parent
  #44
Here for the gear
 
Quote:
Originally Posted by Plush ➡️
Maybe that player or that group that needed all that micro surgery was not ready to record.
Yes, I think this would apply regardless of genre. I'm not sure editing can substitute for a good performance, even with the most advanced editing tools.
Old 5 days ago | Show parent
  #45
Lives for gear
 
Yannick's Avatar
 
🎧 15 years
IMO good editing can enhance an already good performance. It also allows the musicians to take more risks than in a live concert, which sometimes results in something magical that happens only rarely in concert (if it needs to be clean as well).

Re AI editing: I think it will never come. The differences between good takes are sometimes quite subtle. Editing together two good takes which actually do not match musically would seem unavoidable.

What would be the biggest help is automatic, near-perfect crossfades. I can actually explain how to do them (visually), so what's the problem programming them?
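As a rough illustration of the non-intelligent part of that wish - the fade itself rather than the choice of where to put it - here is a minimal equal-power crossfade in Python/NumPy; the 40 ms fade length is an arbitrary assumption, not anyone's recommended value:

Code:
import numpy as np

def equal_power_crossfade(out_take, in_take, sr, fade_ms=40.0):
    """Splice the end of `out_take` into the start of `in_take`
    with an equal-power (constant-energy) crossfade."""
    n = int(sr * fade_ms / 1000.0)
    if n > min(len(out_take), len(in_take)):
        raise ValueError("takes are shorter than the fade length")

    # Quarter-sine gain curves: gain_out**2 + gain_in**2 == 1 at every sample,
    # so largely uncorrelated material keeps constant energy across the splice
    # (for identical material an equal-gain fade is the usual choice instead).
    t = np.linspace(0.0, np.pi / 2.0, n, endpoint=False)
    gain_out = np.cos(t)
    gain_in = np.sin(t)

    overlap = out_take[-n:] * gain_out + in_take[:n] * gain_in
    return np.concatenate([out_take[:-n], overlap, in_take[n:]])

The hard part, as the rest of the thread makes clear, is not the fade shape but deciding where the edit point should fall and how the two sides line up; that is where the "visual" (and aural) judgement comes in.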
Old 5 days ago | Show parent
  #46
Gear Guru
 
joelpatterson's Avatar
 
2 Reviews written
🎧 15 years
Quote:
Originally Posted by Yannick ➡️
....

What would be the biggest help is automatic, near-perfect crossfades. I can actually explain how to do them (visually), so what's the problem programming them?
I would swear, and I'm not generally a swearing guy, that somehow, ever since I upgraded to Digital Performer 10, the process I used to go through (selecting a range, cutting and pasting, crossfading, and then fine tuning because it wasn't quite perfect) has been automated-- it really is as if the computer sees what I'm trying to do (replace the faulty note with the new proper note) and sees to it that the timing is right.

Which is obviously the ideal realm for the application of AI: "You know what I'm trying to do, yet as a faulty human I am almost getting it right-- so here, can you take care of this for me and save us both a lot of time?"
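For what it's worth, that kind of "snap the replacement onto the original" behaviour need not involve much AI at all: plain normalised cross-correlation is enough to find the offset at which a pasted phrase best lines up with the material it replaces. A generic sketch, not how Digital Performer actually does it (all names here are hypothetical):

Code:
import numpy as np
from scipy.signal import correlate

def best_paste_offset(region, patch):
    """Return the sample offset within `region` at which `patch`
    lines up best, judged by normalised cross-correlation."""
    # Remove DC so the match follows waveform shape, not absolute level.
    r = region - np.mean(region)
    p = patch - np.mean(patch)

    corr = correlate(r, p, mode="valid", method="fft")

    # Normalise by the local energy of the region under each candidate
    # position, otherwise loud passages always "win".
    window = np.ones(len(p))
    local_energy = np.sqrt(np.maximum(correlate(r * r, window, mode="valid"), 0.0)) + 1e-12
    score = corr / (local_energy * np.linalg.norm(p))

    return int(np.argmax(score))

# Usage: slide the replacement note over the neighbourhood of the edit,
# then paste it best_paste_offset(old_neighbourhood, new_note) samples
# into that neighbourhood.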
Old 5 days ago | Show parent
  #47
Gear Guru
 
1 Review written
🎧 5 years
assuming musicians do take some 'risks' - which is something i would expect (experienced) musicians to do, or else i get bored pretty quickly, i have to admit - imo there's more magic in a live performance than any studio recording can ever achieve.

clever editing and mixing can then (maybe) emphasize the artistic expression (or contribute to preserving it to some degree), but i can't think of or imagine any editing turning a mediocre performance into a stunning recording: technical perfection (in terms of editing) alone won't do / is merely a prerequisite for any production to be released.

regarding costs: i'm getting hired by orchestras to do whatever it takes to make them sound good, and the producer decides what's good (enough) to be used for a recording; based on that, the producer (with my input) can then make an educated guess on a time frame within which the recording can get edited/mixed/completed - meaning: these costs can be calculated and are way below figures mentioned previously in this thread...

...but then, this illustrates a different approach to recordings altogether.




p.s. machine learning? - i'm in this business to communicate and interact with human beings, but i'll happily tell you when artificial intelligence has once again sent me into a traffic jam! ;-)

Last edited by deedeeyeah; 5 days ago at 04:27 PM..
Old 5 days ago | Show parent
  #48
Lives for gear
 
🎧 10 years
Quote:
Originally Posted by joelpatterson ➡️
I would swear, and I'm not generally a swearing guy, that somehow, ever since I upgraded to Digital Performer 10, the process I used to go through (selecting a range, cutting and pasting, crossfading, and then fine tuning because it wasn't quite perfect) has been automated-- it really is as if the computer sees what I'm trying to do (replace the faulty note with the new proper note) and sees to it that the timing is right.

Which is obviously the ideal realm for the application of AI: "You know what I'm trying to do, yet as a faulty human I am almost getting it right-- so here, can you take care of this for me and save us both a lot of time?"
Which begs the question...how was DP taught to anticipate your requirements, and how much machine learning was required to get it to this stage of sophistication?

What if DP went to graduate school and was subjected to thousands more hours (or days, or weeks) of 'field data acquisition boot-camp'...how much better could it get at the job, etc etc?

Why stop at perfection...when the robot is capable of better than that? At what point is the human navigator fully redundant?
Old 5 days ago | Show parent
  #49
Lives for gear
 
Sharp11's Avatar
 
🎧 15 years
Quote:
Originally Posted by studer58 ➡️
Which begs the question...how was DP taught to anticipate your requirements, and how much machine learning was required to get it to this stage of sophistication?

What if DP went to graduate school and was subjected to thousands more hours (or days, or weeks) of 'field data acquisition boot-camp'...how much better could it get at the job, etc etc?

Why stop at perfection...when the robot is capable of better than that? At what point is the human navigator fully redundant?
If you drive a modern car, you’ve experienced “machine learning” in the form of a drive-by-wire throttle - the car learns how you drive and will adjust its sensitivity and response to your inputs.

I use Ozone 9 as my primary mastering program for delivering stems to my clients - it has a decent AI (learning) algorithm that gets me in the ballpark as a starting place, saving my time and my ears for fine-tuning and meeting the requirements that work best for the shows I work on. I think it’s great, but it’s completely up to the user whether to use it or not.
Old 5 days ago | Show parent
  #50
Lives for gear
 
Plush's Avatar
 
5 Reviews written
🎧 15 years
Sequoia already has something built in called “MusyC”.
This feature finds the same musical passages in multiple takes and lines them up for you to audition against the basis take.
Old 5 days ago | Show parent
  #51
Lives for gear
 
Yannick's Avatar
 
🎧 15 years
When I am done recording, all my takes are already synced up, so I fail to see the functionality of this one …
Old 5 days ago
  #52
Lives for gear
 
Plush's Avatar
 
5 Reviews written
🎧 15 years
No they are not lined up!
I’m talking about a feature where you isolate a portion of the recording that you want to edit. The snippet runs for 21 seconds. Let’s say you have 7 takes of that portion of music loaded into Sequoia. MusyC finds all the other similar 21 second portions in the 7 takes and lines them up for you to audition.
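For anyone curious how such a feature could be approached in principle (a generic sketch, not how Sequoia's MusyC actually works): compare chroma features of the reference snippet against each take with subsequence DTW, which tolerates moderate tempo differences, and report the best-matching region. The file names below are made up:

Code:
import librosa

HOP = 512  # analysis hop size in samples

def find_similar_passage(snippet_path, take_path):
    """Locate the region of `take_path` that best matches the audio
    in `snippet_path`, using chroma features and subsequence DTW."""
    snippet, sr = librosa.load(snippet_path, sr=None, mono=True)
    take, _ = librosa.load(take_path, sr=sr, mono=True)

    X = librosa.feature.chroma_cqt(y=snippet, sr=sr, hop_length=HOP)
    Y = librosa.feature.chroma_cqt(y=take, sr=sr, hop_length=HOP)

    # Subsequence DTW: find where the short query X best fits inside Y.
    _, wp = librosa.sequence.dtw(X, Y, subseq=True)

    # The warping path is returned end-to-start; column 1 indexes the take.
    start_t = librosa.frames_to_time(wp[-1, 1], sr=sr, hop_length=HOP)
    end_t = librosa.frames_to_time(wp[0, 1], sr=sr, hop_length=HOP)
    return start_t, end_t

print(find_similar_passage("basis_take_bars_45_66.wav", "take_05.wav"))

Running this once per take would give the seven candidate regions to audition; doing it quickly and presenting the results visually aligned, as described below, is where the real product work lies.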
Old 5 days ago | Show parent
  #53
Lives for gear
 
David Rick's Avatar
 
🎧 15 years
Quote:
Originally Posted by Plush ➡️
Let’s say you have 7 takes of that portion of music loaded into Sequoia. MusyC finds all the other similar 21 second portions in the 7 takes and lines them up for you to audition.
...and to elaborate, it presents them in visual alignment even if the tempi and resulting clip lengths are slightly different. It's a big help when navigating a long and complex piece.
Old 5 days ago | Show parent
  #54
Lives for gear
 
Sharp11's Avatar
 
🎧 15 years
Fwiw, I love editing, to me, making a recording is a lot like making a movie - you compile all the “footage” you feel you might want to work with, then edit.

Of course making a recording can be a one-take, spontaneous affair - I’ve done plenty of these - but sometimes I’ll improvise a piano piece, and it’s really good, yet I may feel I should’ve repeated a two-bar phrase. So instead of copying and pasting it, I’ll walk to the piano, replay just those two bars, and edit them in. Sometimes many years later (I once edited a new phrase into a piece I’d done 20 years earlier, using the same piano and mics but different preamps, and it sounded exactly the same).

It’s all about the skill and sensibilities of the musicians and editors (sometimes one person). At the end of the day, if it sounds great and of a whole, your work is done. That’s all that matters; the beholder shouldn’t be aware of the recording and editing process, they should just connect with the work.
Old 5 days ago | Show parent
  #55
Gear Head
 
Wavefront's Avatar
 
Quote:
Originally Posted by David Rick ➡️
...and to elaborate, it presents them in visual alignment even if the tempi and resulting clip lengths are slightly different. It's a big help when navigating a long and complex piece.
I would really appreciate this functionality in Pyramix . . .
Old 5 days ago | Show parent
  #56
Gear Guru
 
joelpatterson's Avatar
 
2 Reviews written
🎧 15 years
Quote:
Originally Posted by studer58 ➡️
....At what point is the human navigator fully redundant ?
Oh, believe me, I am terrified of all the implications in play here... but I think in my case it's merely a user-friendly editing propensity (that would be a great stock phrase for it, like the way your phone will guess at the next word you want to type): it sees the two waveforms at the heart of a cut-and-paste edit and defaults to aligning the "new" waveform at the precise location of the "old" waveform-- I mean... is there some reason not to?

But I really dread the day the computer will talk to me, ask me about these decisions, argue a case of its own...
Old 5 days ago | Show parent
  #57
Lives for gear
 
Yannick's Avatar
 
🎧 15 years
Quote:
Originally Posted by Plush ➡️
No they are not lined up!
I’m talking about a feature where you isolate a portion of the recording that you want to edit. The snippet runs for 21 seconds. Let’s say you have 7 takes of that portion of music loaded into Sequoia. MusyC finds all the other similar 21 second portions in the 7 takes and lines them up for you to audition.
Yes they are in my workflow.
Attached Thumbnails
Classical Editing - A few questions-edit01e.jpg  
Old 5 days ago | Show parent
  #58
Lives for gear
 
Yannick's Avatar
 
🎧 15 years
Quote:
Originally Posted by David Rick ➡️
...and to elaborate, it presents them in visual alignment even if the tempi and resulting clip lengths are slightly different. It's a big help when navigating a long and complex piece.
Yes, but I do not like the fact that it aligns the tempi as well.
For me, it is a big aid that I can see minute timing differences without auditioning, as well as which takes are generally too slow or too quick.
Old 5 days ago | Show parent
  #59
Lives for gear
 
Yannick's Avatar
 
🎧 15 years
Quote:
Originally Posted by Wavefront ➡️
I would really appreciate this functionality in Pyramix . . .
I would vastly prefer a quasi-intelligent crossfade, as an addition to the point-source editing.

As it is now, there is a big offset on the in points versus the out points. About 80% are off in the same direction. Does anyone else get this?
Old 18 hours ago | Show parent
  #60
Here for the gear
 
Quote:
Originally Posted by Yannick ➡️
What would be the biggest help is automatic, near-perfect crossfades. I can actually explain how to do them (visually), so what's the problem programming them ?
Can you give an example of how you are doing crossfades visually?

I've often found that the waveform can be misleading when editing, so careful listening is required to achieve the best result.