Hi,
I'm trying to create a custom bottom toolbar for my app and want to use the same fade-blur effect that iOS uses under navigation and tab bars, but I'm having trouble getting it to work.
Here is what I tried:
Screenshot 1: putting my custom view in a toolbar via ToolbarItem(placement: .bottomBar). This works only inside a NavigationStack, and it adds a glass pane that I do not want (I want to place a custom component there that already has the correct glass pane).
Screenshot 2: using safeAreaBar or safeAreaInset in any combination with NavigationStack and/or .scrollEdgeEffectStyle(.soft, for: .bottom). This shows my component correctly, but does not apply the fade-blur.
Can you please help me find the correct way to do this? Thanks!
^ Screenshot 1
^ Screenshot 2
Test code:
struct ContentView2: View {
    var body: some View {
        NavigationStack {
            ScrollView(.vertical) {
                VStack {
                    Color.red.frame(height: 500)
                    Color.green.frame(height: 500)
                }
            }
            .ignoresSafeArea()
            .toolbar {
                ToolbarItem(placement: .bottomBar) {
                    HStack {
                        Text("bottom")
                        Spacer()
                        Text("text content")
                    }
                    .bold()
                    .padding()
                    .glassEffect()
                    .padding(.horizontal)
                }
            }
        }
    }
}
How to show photos from PHPickerViewController the way they are shown in Apple's Photos app with "View Full HDR" enabled? I've watched all the EDR-related talks, and I'm already rendering a CIImage into an MTKView... nothing helps; the image looks the same as in a UIImageView. How?! :-)
What I do now:
1. I get a photo URL (a copy) via provider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier).
2. I create an MTKView with metalLayer.wantsExtendedDynamicRangeContent = true and the other recommended settings.
3. I load a CIImage from the URL obtained earlier.
4. I render the CIImage via a CIContext backed by an mtlCommandQueue, with the option .useSoftwareRenderer: false.
And I still get a "normal" SDR image.
The exact same image is displayed in the Photos app with much brighter whites, and that is exactly what I want to achieve.
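For completeness, here is a minimal sketch of how I set up steps 2 to 4 (the function names makeEDRView and loadPickedImage are just mine; the .expandToHDR load option is an iOS 17+ flag I'm experimenting with and have not verified as the fix):

```swift
import CoreImage
import MetalKit

// Sketch only: names below are placeholders, not Apple API.
func makeEDRView(device: MTLDevice) -> MTKView {
    let view = MTKView(frame: .zero, device: device)
    view.colorPixelFormat = .rgba16Float  // extended-range drawable format
    view.framebufferOnly = false          // let CIContext write into the drawable
    if let metalLayer = view.layer as? CAMetalLayer {
        metalLayer.wantsExtendedDynamicRangeContent = true
        metalLayer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)
    }
    return view
}

// On iOS 17+, Core Image only decodes the HDR gain map when asked,
// so loading with .expandToHDR might be the missing piece (assumption
// on my part, not confirmed against this exact pipeline).
func loadPickedImage(from url: URL) -> CIImage? {
    CIImage(contentsOf: url, options: [.expandToHDR: true])
}
```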
Please help :)
Thanks!
Hi,
Just found out that MLWordTagger could help my application in a huge way. Create ML is simple, and that's awesome! Everything works great, but training is not fast: creating an MLWordTagger barely uses one core. Why is that? Can it be made faster? Maybe my question is dumb, but I'm new to this.
BTW, there is basically no information on this subject on the Internet. Why? It's such an incredibly powerful feature!
Here is my code:
private func train(trainingDataUrl: URL, validationDataUrl: URL, outputTo modelUrl: URL) {
    let trainingData = try! MLDataTable(contentsOf: trainingDataUrl)
    let validationData = try! MLDataTable(contentsOf: validationDataUrl)
    let model = try! MLWordTagger(
        trainingData: trainingData,
        tokenColumn: "tokens",
        labelColumn: "labels",
        parameters: MLWordTagger.ModelParameters(validationData: validationData)
    )
    try! model.write(to: modelUrl)
}
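For context, my training data is JSON in the shape Create ML expects for a word tagger: one object per sentence, with parallel token and label arrays matching the column names in the code (the rows below are made-up examples, not my real data):

```json
[
  { "tokens": ["Apple", "announced", "Swift"], "labels": ["ORG", "O", "TECH"] },
  { "tokens": ["I", "love", "Xcode"],          "labels": ["O", "O", "TECH"] }
]
```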